Fireside Chat: Intel, Tata Electronics, CDAC & Asia Group | India AI Impact Summit

20 Feb 2026 14:00h - 15:00h


Session at a glance

Summary

This panel discussion focused on bridging India’s AI policy vision with practical enterprise deployment needs, featuring insights from both government infrastructure and private sector perspectives. The conversation was moderated by Aman Khanna from the Asia Group and included Dr. Vivek Khaneja from CDAC (Center for Development of Advanced Computing) and Nitin Bajaj from Intel India.


Dr. Khaneja outlined CDAC’s role in building India’s supercomputing infrastructure through the PARAM series, which currently provides 48 petaflops of computing capacity used by approximately 15,000 researchers for applications ranging from drug discovery to weather prediction. He explained that this capacity will expand to 100 petaflops by year-end across 60 installations. Bajaj discussed the challenges enterprises face in scaling AI from pilot projects to production, citing issues with ROI calculations, deployment model decisions (cloud versus on-premise), and the rapid pace of technological change that makes strategic planning difficult.


Both panelists acknowledged significant barriers in moving from proof-of-concepts to real-world deployments, particularly when dealing with uncurated data and complex operational environments. On the sovereignty question, Dr. Khaneja advocated for a pragmatic approach where India controls critical software layers while sourcing hardware globally, noting CDAC’s ambitious goal to develop indigenous GPUs by 2029-30. The discussion revealed that data sovereignty concerns vary by industry, with banking and healthcare prioritizing local control while other sectors are more flexible about cloud deployment.


The panelists identified talent gaps as a continuing challenge, particularly in practical MLOps deployment skills versus theoretical knowledge. They emphasized energy efficiency as crucial for sustainable AI scaling, with both organizations implementing advanced cooling technologies and power-aware designs. Success metrics for India’s AI future included widespread workflow integration and democratized access that could benefit even small-scale entrepreneurs like vegetable vendors.


Key points

Major Discussion Points:

India’s AI Infrastructure and Capabilities: Discussion of CDAC’s PARAM supercomputing series providing 48 petaflops of compute capacity used by 15,000 researchers for applications like drug discovery, weather prediction, and molecular modeling, with plans to expand to 100 petaflops by year-end.


Enterprise AI Adoption Challenges: Exploration of barriers preventing Indian enterprises from scaling AI from pilot projects to production, including ROI concerns, deployment model decisions (on-premise vs. cloud vs. edge), and the rapid pace of AI technology evolution making strategic choices difficult.


AI Sovereignty vs. Global Dependencies: Candid examination of India’s AI sovereignty aspirations versus practical reliance on global technology (chips, systems, software), with discussion of pragmatic approaches focusing on controlling critical chokepoints while sourcing silicon globally.


Talent and Skills Gap: Assessment of India’s AI talent landscape, highlighting the gap between theoretical knowledge and practical deployment skills, particularly in MLOps and real-world data handling, while noting India’s demographic advantage with its young, AI-exposed population.


Energy and Sustainability Concerns: Discussion of power consumption challenges in AI infrastructure, covering technical solutions like liquid cooling, power-aware designs, and the need for energy-efficient models and deployment strategies.


Overall Purpose:

The discussion aimed to bridge the gap between India’s ambitious AI policy announcements and infrastructure investments with the practical realities of enterprise AI deployment and scaling, examining where government vision aligns with or diverges from actual market needs.


Overall Tone:

The conversation maintained a consistently pragmatic and candid tone throughout. Both panelists were refreshingly honest about challenges and limitations, avoiding typical conference hyperbole. The moderator facilitated substantive technical and policy discussions, and the tone remained collaborative and solution-oriented, with speakers building on each other’s points rather than promoting competing agendas.


Speakers

Speakers from the provided list:


Aman Khanna – Partner and Managing Director for India at the Asia Group, moderator of the panel discussion


Dr. Vivek Khaneja – Executive Director of the Center for Development of Advanced Computing (CDAC), expertise in supercomputing infrastructure, high-performance computing, and cybersecurity


Nitin Bajaj – Director of Sales for Conglomerate Accounts at Intel India, expertise in enterprise AI adoption, digital transformation, and technology leadership with 28+ years of global experience


Additional speakers:


Sangeeta Reddy – Joint Managing Director, Apollo Hospitals (mentioned at the end to give remarks, but did not participate in the main discussion)


Full session report

This panel discussion at a major technology conference examined India’s artificial intelligence ambitions through the lens of both government infrastructure development and enterprise deployment realities. Moderated by Aman Khanna from the Asia Group, the conversation brought together Dr. Vivek Khaneja from the Centre for Development of Advanced Computing (CDAC), representing India’s national AI infrastructure efforts, and Nitin Bajaj from Intel India, offering insights from enterprise AI adoption experiences.


India’s AI Infrastructure Foundation

Dr. Khaneja outlined CDAC’s substantial progress in building India’s supercomputing backbone through the PARAM series. The current infrastructure provides 48 petaflops of computing capacity distributed across the National Knowledge Network (NKN), serving approximately 15,000 researchers nationwide. This capacity supports diverse computationally intensive applications including drug discovery, bioinformatics, protein folding, molecular modelling, weather prediction, oil exploration, finite element modelling, and computational fluid dynamics.


The infrastructure extends beyond academic research to include micro, small and medium enterprises (MSMEs) and start-ups through the Param Utkarsh facility at CDAC’s Bangalore centre. This democratisation of high-performance computing resources represents a strategic approach to fostering innovation across India’s technology ecosystem. The planned expansion to 100 petaflops by year-end, distributed across 60 installations, demonstrates the government’s commitment to scaling AI infrastructure capabilities.


Dr. Khaneja also revealed CDAC’s ambitious project to develop indigenous GPUs based on RISC-V architecture, with a target completion date of 2029-30, representing a significant step toward technological self-reliance.


Enterprise AI Adoption Challenges

Bajaj’s perspective revealed significant challenges in translating AI potential into production-scale deployments. Despite widespread enthusiasm and substantial investments in AI infrastructure over the past year, most organisations remain trapped in pilot phases. This bottleneck stems from multiple interconnected factors.


The primary challenge lies in deployment complexity. Enterprises must navigate choices between on-premise, cloud, and edge deployments whilst evaluating different silicon options and calculating return on investment. The rapid evolution of AI models compounds this decision paralysis, as organisations fear making investments that may quickly become obsolete.


Dr. Khaneja provided insight into why proof-of-concepts fail to scale, noting that whilst organisations achieve impressive results with curated datasets in controlled environments, production deployment presents different challenges. Real-world environments involve messy, incomplete, or skewed data requiring extensive preprocessing. Additionally, most organisations lack MLOps expertise necessary for managing AI systems at scale, having developed capabilities primarily through standardised test cases rather than production scenarios.


Practical AI Applications and Use Cases

Bajaj outlined specific enterprise use cases where AI deployment is gaining traction: smart manufacturing, smart retail, document search, surveillance, inventory management, customer analytics, and theft prevention. He emphasised the evolution from Large Language Models (LLMs) to Small Language Models (SLMs) as organisations seek more targeted, efficient solutions for specific applications.


The discussion highlighted how different industries approach AI deployment based on their specific requirements and constraints, with some prioritising performance whilst others focus on data sovereignty and regulatory compliance.


The Frugal AI Approach

Bajaj introduced the concept of “frugal AI”—matching performance requirements to appropriate hardware capabilities rather than defaulting to maximum-performance solutions. He illustrated this with document search applications, where human reading speed provides a baseline for performance requirements. If humans process approximately 10 prompts per second, then systems delivering 15-20 prompts per second may be adequate, eliminating the need for expensive systems capable of processing 200 prompts per second.
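The sizing logic behind frugal AI can be sketched as a simple selection rule: meet the human-paced requirement at the lowest cost, rather than buying the fastest option. The throughput and cost figures below are illustrative placeholders, not vendor benchmarks, and the option names are hypothetical.

```python
# Sketch of the "frugal AI" sizing rule: pick the cheapest hardware option
# whose throughput meets the requirement, not the fastest option available.

def pick_hardware(required_rate, options):
    """Return the cheapest option meeting required_rate (prompts/sec).

    options: list of (name, prompts_per_sec, relative_cost) tuples.
    Returns None if no option is adequate.
    """
    adequate = [o for o in options if o[1] >= required_rate]
    if not adequate:
        return None
    return min(adequate, key=lambda o: o[2])

# Illustrative numbers only: a fast human reader processes ~10 prompts/sec,
# so ~15-20 prompts/sec is adequate for an interactive document search.
options = [
    ("cpu_only", 18, 1.0),       # integrated GPU/NPU path
    ("mid_gpu", 80, 4.0),
    ("high_end_gpu", 200, 10.0),
]

print(pick_hardware(required_rate=15, options=options))  # cheapest adequate option
```

Under these assumed numbers, the CPU-only path wins: it clears the 15 prompts/sec bar at a tenth of the cost of the 200 prompts/sec system, which is exactly the trade-off the frugal-AI framing highlights.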


This approach becomes particularly relevant given that modern processors incorporate GPU and NPU capabilities that can handle many AI workloads effectively. Intel’s latest processors can run 7-8 billion parameter models on edge devices and up to 80 billion parameter models in data centre environments, potentially eliminating GPU requirements for many use cases.
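The parameter-count figures above imply a simple placement heuristic. The sketch below mirrors the thresholds cited in the text (7-8 billion parameters on edge, up to 80 billion on data-centre CPUs); the tier names and exact cutoffs are indicative assumptions, not Intel specifications.

```python
# Indicative model-placement heuristic based on the parameter counts cited
# in the discussion; cutoffs are assumptions for illustration.

def placement(model_params_billions: float) -> str:
    """Suggest a deployment tier for a model of the given size."""
    if model_params_billions <= 8:
        return "edge (CPU with integrated GPU/NPU)"
    if model_params_billions <= 80:
        return "data centre CPU"
    return "dedicated accelerator"

print(placement(7))    # within the edge-class range cited above
print(placement(120))  # beyond the CPU-class range, so an accelerator
```

A rule like this is only a first filter; as the discussion notes, throughput requirements and ROI still decide the final choice within each tier.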


Pragmatic Approach to Technological Sovereignty

The sovereignty discussion revealed a refreshingly practical approach from both speakers. Dr. Khaneja acknowledged that whilst complete technological independence remains aspirational, it is neither immediately feasible nor necessarily optimal for India’s current capabilities.


He advocated for a layered approach to sovereignty, focusing on controlling critical chokepoints whilst accepting strategic dependencies. This framework suggests India should concentrate on developing capabilities in software stacks, model development, orchestration systems, and applications whilst sourcing hardware components globally when appropriate.


The speakers recognised that data sovereignty requirements vary significantly across industries. Banking and healthcare sectors prioritise local data control due to regulatory requirements, often accepting performance trade-offs. Conversely, manufacturing and retail enterprises frequently prioritise cloud deployment for superior performance and faster development cycles.


Talent Development Challenges and Opportunities

The speakers presented nuanced perspectives on India’s AI talent landscape. Dr. Khaneja identified gaps between theoretical knowledge and practical deployment skills among engineering graduates. Whilst students demonstrate strong mathematical foundations and machine learning concepts, they lack experience with real-world challenges such as data cleaning, MLOps implementation, and production deployment constraints.


He emphasised the need for curriculum changes in colleges to include practical MLOps training, moving beyond theoretical frameworks to hands-on experience with production environments.


Bajaj offered a more optimistic long-term assessment, emphasising India’s demographic advantages. With a young, digitally native population, he argued that current capability gaps will resolve within 2-4 years as this generation develops practical skills through exposure to AI technologies.


Energy Efficiency and Sustainability

Both speakers acknowledged energy consumption as a critical constraint for AI scaling. Dr. Khaneja outlined technical approaches including power-aware hardware designs and advanced cooling technologies. CDAC’s infrastructure improvements focus on transitioning to liquid cooling systems, achieving better power usage effectiveness compared to traditional air cooling.


Bajaj emphasised Intel’s contributions through manufacturing improvements, including advanced technologies that improve power efficiency. He highlighted the need for standardised benchmarking of energy consumption per token for both training and inference operations to enable meaningful comparisons across different models and deployment approaches.
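The per-token energy metric Bajaj calls for can be computed directly from power draw, elapsed time, and token count. The sketch below uses hypothetical figures purely for illustration; they are not measured benchmarks.

```python
# Sketch of the per-token energy metric proposed in the discussion:
# joules per token = average power draw (W) x elapsed time (s) / tokens.

def joules_per_token(avg_power_watts: float, elapsed_seconds: float, tokens: int) -> float:
    """Energy cost per generated token, in joules."""
    return (avg_power_watts * elapsed_seconds) / tokens

# Hypothetical runs (power draws and token counts are invented examples).
runs = {
    "slm_on_cpu": joules_per_token(150, 60, 12_000),
    "llm_on_gpu": joules_per_token(700, 60, 30_000),
}
for name, j in sorted(runs.items(), key=lambda kv: kv[1]):
    print(f"{name}: {j:.3f} J/token")
```

Normalising to joules per token in this way is what would make comparisons meaningful across models and deployment modes; without a standard measurement window and workload, the raw wattage figures alone say little.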


Vision for AI Success

The speakers’ definitions of success emphasised inclusive, practical AI deployment. Dr. Khaneja focused on AI integration into actual workflows to improve daily life experiences, moving beyond demonstrations to practical utility.


Bajaj articulated a vision of democratised AI access, drawing parallels to India’s transformation in data usage globally. He envisioned AI deployment reaching grassroots levels, enabling small-scale entrepreneurs to leverage AI for business improvement, positioning AI as a tool for social mobility and economic empowerment.


Bridging Infrastructure and Implementation

The discussion successfully bridged policy aspirations with implementation realities, revealing both alignment and tensions between government infrastructure development and enterprise needs. The speakers demonstrated consensus on fundamental challenges whilst offering complementary perspectives on solutions.


Their pragmatic approach to sovereignty, emphasis on energy efficiency, and focus on inclusive success metrics suggest a mature understanding of AI deployment challenges. The conversation’s technical depth and candid assessment of current limitations provided valuable insights for navigating India’s AI transformation.


Rather than offering simplistic solutions, the speakers acknowledged complexity whilst providing frameworks for thinking about trade-offs and strategic choices. This approach offers a model for emerging economies pursuing AI development strategies whilst managing resource constraints and global technological dependencies.


The discussion highlighted that success in AI deployment requires not just infrastructure investment, but also practical solutions to enterprise challenges, realistic approaches to technological sovereignty, and inclusive frameworks that extend AI benefits beyond large enterprises to smaller organisations and individual entrepreneurs.


Session transcript

Aman Khanna

India’s AI stack: bridging government vision with enterprise needs. My name is Amanraj Khanna. I’m a partner and managing director for India at the Asia Group. It’s also my privilege to be the moderator here today. I have to say the energy here is still so palpable, even after five days of this. So it’s absolutely brilliant to be here with you all. We are at a truly fascinating moment in India’s AI ambitions. Some massive policy announcements: we just had Pax Silica announced this morning. Very exciting indeed. Significant infrastructure investments: Brad Smith, if you heard him yesterday, Microsoft announced $50 billion in the global south alone, after the $20 billion that has been announced for India. Google, as you know, $15 billion.

And growing enterprise adoption. Anthropic announced its partnership with Infosys, Tata, OpenAI. I’m sure you’re all watching this. But announcement is one thing. Deployment, and then achieving scale, quite another. So that’s why this panel matters. Today’s conversation brings two critical perspectives to this fundamental question: what does it take to translate that vision into adoption and scale? One of my two distinguished speakers brings unparalleled insight into infrastructure being built through national R&D institutions. The other sees the reality of what India’s largest enterprises actually need when they deploy AI. So we are here to have an honest conversation about these two tracks: where do they connect, and perhaps where they don’t. So with that, let me introduce my distinguished panelists.

First, I have to my immediate left Mr. Vivek Khaneja. Vivek is the executive director of the Centre for Development of Advanced Computing, CDAC as it’s known. CDAC has built the PARAM supercomputing series, providing AI compute infrastructure for government departments and national missions. It conducts cutting-edge research into high-performance computing and cybersecurity, and also trains thousands of engineers annually in advanced computing and AI. Vivek, of course, has held multiple senior leadership positions within CDAC and has guided national initiatives in these critical areas. Welcome, Vivek. I also have, to my far left, Nitin Bajaj. Nitin is Director of Sales for Conglomerate Accounts at Intel India. He leads Intel’s engagement with India’s largest enterprises on their digital transformation and AI adoption journeys.

He has over 28 years of global experience in sales and technology leadership and orchestrates a broad partner ecosystem. These include system integrators, ISVs, and cloud providers, to deliver Intel-based solutions spanning cloud, AI, HPC, 5G, edge, and end-user computing. So, in summary, he sees firsthand what drives enterprise infrastructure decisions and what actually prevents companies from moving AI from pilot to true scale. So with those introductions, let’s get right into it. Vivek, why don’t I start with you? So here’s my first question, Vivek. CDAC has built the PARAM supercomputing series and AI compute infrastructure. What compute capabilities does CDAC actually provide today? Who uses them? For what workloads? And what are the key constraints that you see operating within?

Dr. Vivek Khaneja

Okay, thanks. So as you know, CDAC is a scientific society under the Ministry of Electronics and IT. And one of the mandates that we have is to build supercomputing capacity in the country. This is the mandate which is given to us under the National Supercomputing Mission, where we have developed a series of supercomputers under the brand name PARAM. We started in the late 80s, starting with the PARAM 8000. And now we have the PARAM series of supercomputers, which are installed with our own software. About 48 petaflops of overall performance is installed in the country, connected over the National Knowledge Network, or the NKN as it is called.

This capacity is going to be augmented to about 100 petaflops by the end of this year, with 60 installations. Most of these installations are today being used by researchers. About 15,000 researchers fire jobs across these machines on the NKN. A lot of it is also being used by MSMEs. For example, we have opened Param Utkarsh, which is housed at our Bangalore centre, for use by start-ups and MSMEs. The kinds of applications that run here include drug discovery, bioinformatics, protein folding, molecular modeling, weather prediction. I mean, almost all of these number-crunching problems are being run: oil exploration, finite element modeling, computational fluid dynamics problems.

So all such problems are being run across these clusters by researchers. We also have a lot of expertise in various domains, which we have developed in-house over a period of years, where we are hand-holding a lot of these agencies. A lot of government agencies are working with us on this.

Aman Khanna

Thanks so much, Vivek. Lots of threads to pull on there, but for a moment, let me go to Nitin. Nitin, you work with some of India’s largest enterprises, some of our national champions. So when these enterprises commit to AI, which you must have seen increasingly, what are some of the actual barriers that prevent them from moving from pilot projects to some of those production-scale deployments that everyone envisions, especially at events like this?

Nitin Bajaj

Thank you. First of all, thank you for inviting me. It’s a privilege to be here, talking in front of such an esteemed crowd. So basically, I think I’ll break this into two or three pieces. One, be it Indian enterprises or global enterprises, I think everybody is grappling with the same sort of problem statement. And everybody is trying to find those use cases which are very, very pertinent for their own enterprises. Some enterprises are in the manufacturing domain, some are on the retail side of it. So the typical use cases that they have could be around smart manufacturing, smart retail, or specific generic use cases where they want to do a lot of document search.

They have to have fine-tuned search on the policies, the T&Cs, and things of that sort. So essentially, the way I see it, it’s twofold. One, they are looking at the speed, and everybody is trying to figure out the best ROI. So I think the biggest gap today is what to use: whether to use it on-prem, or whether to go on cloud and use the open APIs which are available to them. Then, once those use cases are ready, as you said, from pilot to production, what is the final cost of that deployment? And then the third angle that comes in is whether to centralize all of this or to take it to the edge.

So there is no single answer. And when you think of all the ecosystem providers that are there today, be it from silicon to ISVs to system integrators, everybody has a pocket of expertise in their own sense. But today there is no single formula. And the entire AI journey is changing so rapidly: models are being dropped at the speed of light, and the whole ecosystem, from silicon to OS to everything else, is changing so fast that even the enterprises are trying to figure out what is the best deployment model for them, and what is the best ROI that they can get out of it. But in the midst of all of this, in pockets, a lot of these enterprises are trying to see what specific use cases they can bring to the fore that can bring some incremental benefit to whatever operations they are running in their organization.

Now I can give multiple examples. For example, in a manufacturing domain, there could be surveillance kinds of use cases, multi-modal use cases; there could be use cases around how to look at inventory, and then how to look at a complete digital twin, dark-factory kind of scenario. In the case of retail, it is all about, say, preventing thefts, the pilferage that happens, or doing customer analytics. And then, as I said, a lot of document search kinds of examples are going on. But in most of those cases, the whole decision-making is between edge versus on-prem, cloud versus sovereign data centers, what kind of models they should use, and how to find the right ROI, which is where we feel frugal AI is what we propose to the industry.

And I’ll talk more about that maybe later. But what is the best deployment model that can really help an enterprise scale at a level, at a cost point, which is really making sense to them?

Aman Khanna

Let me ask you a very quick follow -on question on that. Do you see that Indian enterprises are increasingly sophisticated in making these choices? And, of course, Intel works globally, right? So how does it compare in terms of its maturity, its sophistication, compared to the other markets where Intel also operates?

Nitin Bajaj

Clearly, in terms of use cases, I think we have the edge. But again, the veracity of data is a big problem. Second, I would say that when you think of these enterprises a year back, almost everybody wanted to buy GPUs, and a lot of these enterprises have bought systems which are powerful enough to run all kinds of use cases. But still they are in that pilot phase, again because of that ROI factor. So things are maturing. Of course, enterprises are becoming smarter. From LLMs, we are now looking at SLMs. So they are trying to figure out the right silicon where they have to land their workloads. So as things are emerging, I think things are getting better. But yeah, it’s still some time before you see those live deployments coming out.

Dr. Vivek Khaneja

Sorry for interjecting; for the benefit of the larger audience, just to add to what he just said: we are seeing a lot of places where people are not able to come out of the POCs. I think one of the major reasons, at least as I have personally seen, is that people are very happy with the POCs. They can train them on curated data sets. But once it actually goes and hits real-life situations, where the data needs to be cleaned up, it’s not clean, you have no proper experience in actual deployment of the MLOps, you have done it in a canned manner, then suddenly the reality hits: no, it’s not that simple.

And as you talked about the ROI, then you have to make those choices: whether I should have this on-prem, do I really need a GPU for my problem, or can I work across multiple VMs on simple IT infrastructure? So those are the choices where, hopefully, in the coming years, intelligent choices will be made. We will start to see more MLOps engineers coming out and deploying these things at scale, because POCs are fine, but the real revenue will come only once you actually deploy at scale.

Aman Khanna

Understood. Thank you so much for those perspectives. Unlike yourselves, you know, you’re both technologists; I’m a little bit in the policy space, on the fringes of this. I work on tech policy, and I haven’t had a single conversation here with a foreign investor which hasn’t talked about sovereignty or dependency. So with my next question, I’ll bring you over to my area, because I’d love to pick both your brains on this. So Vivek, let me talk to you about this first. India talks about sovereignty, but CDAC still relies on global technology, right, whether that’s chips, systems, software stacks. So realistically, what can India build domestically, versus what will we always need to source globally, and where should we focus our capability? So, a question for you as someone who’s really on the cutting edge.

Dr. Vivek Khaneja

Okay, so let me answer that in both a technically and a politically correct way. See, when you talk of sovereignty, let’s see what it really means. Do you want to be completely independent in the entire vertical, right from silicon up to the application? Is that really possible? Let’s say I need to design a GPU. It’s a good aspirational goal. But do I have the wherewithal today to do the entire thing in-house? Probably not. I don’t have the IPs. We can start, definitely. We don’t even have today a fab that can give me 3 nanometer or below production capabilities and packaging capabilities. So I think a more pragmatic approach is to have maybe the silicon coming from outside.

Everything above that should be under my control. So you should be able to control all your critical choke points. That’s the model that IndiaAI has taken. For example, they have created a farm of GPUs which are available, under their control, freely or at a reasonable cost for developers. The models which are being built on top of that are under your control. How those models are going to be orchestrated is under your control. The applications that will use those models are under your control. So for me, that is the sovereignty where you are getting the maximum ROI. Should I really be competing against an H100 or a B200 from NVIDIA? Probably not in the short term.

But yes, as an aspirational goal, just to let you know, CDAC is designing its own GPGPU based on RISC-V. We will probably have something by the end of 2029-30. But till that time, we really need to have a lot of this entire stack under sovereign control, maybe using chips from outside, whether it is NVIDIA, whether it is Sapphire Rapids or Granite Rapids from Intel, or from AMD. But everything above that, all my critical choke points, should be under my control.

Aman Khanna

So I have to compliment you; that was a very candid and pragmatic answer. Nitin, can I reframe that question a little bit for you? In the same vein, as you know, there’s a significant policy focus on data sovereignty and data localization as well, right? So when your enterprise customers are making AI infrastructure decisions, how much do these factor into those choices? And how do they compare with cost and performance considerations? Has this calculus shifted with some of the policy developments that we’ve seen over the past couple of years in AI?

Nitin Bajaj

Well, I would say not really. It all depends on the industry and the market that the enterprise is in. For the banking industry, for the healthcare industry, data sovereignty is very, very important. For a manufacturing industry, or for, say, retail or any other industry, they are trying to build use cases on the cloud, because that’s where they feel that they can build those use cases very, very quickly. They can simply call the APIs available there, and then they see that the performance is much better and they get better accuracy. So it’s a mix of both. Then again, even in a manufacturing environment, as I was calling out, in an OT environment, a lot of these manufacturing firms would want their data to reside within their perimeter, because it is so, so close to them.

Then again, when it comes to the deployment side of it, they’re looking at an edge deployment, which stays within the perimeter. So it’s a mix of both. Now, finally, again to the point that I was making in the beginning: for anything that has to scale, the price becomes a key driver. So it is about, okay, I have a model that I have fine-tuned on the cloud; now, when I have to take it to deployment, can I use something locally which is available with me today, without making a lot of investment, and then can I scale it? Which is where, when Intel is talking about it, we basically focus on frugal AI. Today the Intel Core and Core Ultra CPUs have a GPU, an NPU, and a CPU all combined in a single processor, which gives you enough capability to run maybe a 7 or 8 billion parameter model. So the typical requirements that you will have on the edge are very well suited to this CPU itself. And when it comes to the data center side, the Xeon 6 processors today are able to run a 20 billion parameter model very easily.

They can go up to 80 billion parameters, depending on the specific use case. So “do I need a GPU in every instance?” is the first question that we are trying to ask. And today everybody has a CPU in their environment. So can they reutilize that? Can they test it out? Can they look at performance levels? One example that I discuss with my customers: if you’re looking at, say, a prompt-based engine where you need to do a document search, typically a human eye can read about 10 prompts in a second if somebody is a fast reader. Now, if a processor can give you 15 to 20 prompts, is that good enough as performance for you, or do you want 200 prompts in that particular second?

Maybe 15 or 20 prompts is good enough. So that’s where the cost versus performance comes in. That’s where one has to be very, very calculated in terms of what end use they are looking at and what would best suffice that particular usage. So again, it’s a mix of this localized data versus what can go on to the cloud. And once you look at the scale of it, again you have to look at the cost, because when you’re scaling it on the cloud, the cost may be very, very different from what you can get on the on-prem side. I’m not a proponent of on-prem versus cloud; for me, both of them bring their advantages. But what I’m trying to say is the customer has to really look at what their end use is and at what cost they want it.

Aman Khanna

Understood. So we need to diversify our approaches and have a product-to-mission fit, which is absolutely critical.

Nitin Bajaj

Think of it this way: in the past, everybody used to think only mainframes could solve the problem. Today applications are no longer running on mainframes; they moved to CPU-level systems, and now we are looking at microservices. So everything has evolved over time, and the same thing is happening on the AI side today.

Aman Khanna

Understood. So one quick question that's on everyone's mind, and this is to both of you. From each of your respective perspectives, whether that's government R&D infrastructure or enterprise deployment, is talent and the capability gap still a critical choke point? Vivek, why don't I start with you on this?

Dr. Vivek Khaneja

It is. It's unfortunate, but yes, it is. We do find that we have a set of very bright engineers coming out, but most of them are trained in a good theoretical understanding of what machine learning is. They are good at mathematics and the basics, but when it comes to actual deployments in the field, I think that's where we are lacking. Maybe we need to have a serious look at our curriculum in the colleges, to see how do I train large models, how do I deploy large models using MLOps, because today, as I said, most of these kids are working on curated data sets. They are working on some standard test cases and test-and-validation cases.

But when it comes to real life, life is not that rosy. You have data which is missing, data which is skewed, data which needs to be cleaned; I have real-time constraints, I have other security considerations. Those are not part of the curriculum. So that's where I think some capstone projects, where you are able to handle petabyte-scale data, need to be put in place. Theory is fine, but there's still a lot to learn on the practical side.

Nitin Bajaj

I'll give a different perspective. The way I look at India versus other countries: in other countries it's an aging population; for us it is a booming population at this point in time, with an average age of 13 to 25 or so, which is already exposed to AI. So maybe today we see some sort of gap in terms of AI capabilities, but two or four years down the line, I think that will be bridged very, very quickly. So we have that benefit of demography here in the short run. Of course, as an individual, I am also learning from the kids today how AI can be deployed. So gaps will be there, but I think this is a learning curve for everybody.

Aman Khanna

Thanks, Nitin. I want to get to just a couple more questions, and we'll try to be very quick. One that I've been wanting to ask, and one that I get asked often, is the energy and sustainability question. Supercomputing uses huge amounts of energy and can have societal impacts, whether that's CDAC's supercomputing or the large data centers that Intel works with. So how do we think about the energy and sustainability implications, especially in the Indian context? Vivek, perhaps starting with you.

Dr. Vivek Khaneja

So from my perspective, I look at it as something to be addressed at two levels. One is something that we have been doing. Coming from a VLSI background, I can tell you that there are standard techniques today which are used in all ASIC designs, which are power-aware designs. You have multiple power islands, you have clock-tree gating, and you switch off those cores which are not being used. That's from a design perspective. But when it comes to platform design, there are smart choices being made. For example, if I talk about CDAC solutions, today we are using liquid cooling as well as water cooling in a ratio of almost 70 to 30.

And we are slowly moving the entire thing to purely liquid cooling. There are other advanced techniques which use only air cooling. Ultimately, your PUE will determine the outcome. Typically we see a PUE of around 1.2 or so, as compared to a conventional water-cooled setup, which is about 1.4 or 1.5. There are definitely green norms which are being proposed. I think the question to be asked here, and we need to do this kind of benchmarking across all the models, is: what is the energy that I spend per token for training or for inferencing? That should be a critical benchmark. We need to seriously look at how I can optimize my models to be more power-aware.
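The two metrics Dr. Khaneja mentions, PUE and energy per token, combine naturally: facility overhead multiplies the energy cost of every token served. A minimal sketch, with illustrative numbers rather than measured CDAC figures:

```python
# PUE (facility overhead) and energy per token (model efficiency).
# All inputs below are assumed for illustration.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by
    IT equipment energy. 1.0 is ideal; lower is better."""
    return total_facility_kwh / it_equipment_kwh

def energy_per_token(it_power_watts: float, tokens_per_second: float,
                     facility_pue: float) -> float:
    """Joules drawn from the grid per generated token, including
    cooling and other facility overhead."""
    return it_power_watts * facility_pue / tokens_per_second

# Same server in a liquid-cooled site (PUE ~1.2) vs a conventional one (~1.45)
liquid = energy_per_token(it_power_watts=800, tokens_per_second=400,
                          facility_pue=1.2)
conventional = energy_per_token(800, 400, 1.45)
print(f"liquid-cooled: {liquid:.2f} J/token, "
      f"conventional: {conventional:.2f} J/token")
```

Holding the server constant, dropping PUE from roughly 1.45 to 1.2 cuts grid energy per token by about 17%, which is the kind of gain the cooling choices described above are after.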

Can I have compressed models which take less energy? So yes, energy is one of the critical factors, and hyperscalers especially would need huge amounts of power. There is this joke that we keep telling, that *** But along with the hyperscaler, you also need to have a small power plant designed together with it.

Aman Khanna

Thank you. Nitin, to you.

Nitin Bajaj

I'll make two or three points here. One, from a manufacturing point of view, Intel is utilizing the latest technologies like RibbonFET and PowerVia, which improve power efficiency by 15%. So these are the latest technologies available. Second, we are running our own data centers, which run at a PUE of 1.06, among the most efficient data center PUEs that you could see. There's a white paper on intel.com; I would appreciate it if those interested could look at it. Third, I would say, again, power is the first and foremost ingredient in running those data centers. So one has to be very cautious about what kind of models you are running and where they are landing.

So if you're more judicious in terms of model selection, then of course we can save power in some ways.

Aman Khanna

So one final question; I realize we're out of time. When we assess India's progress over the next three to five years, what does success look like to each of you? Maybe in a sentence, if you can.

Dr. Vivek Khaneja

Success for me would be AI being actually deployed in a lot of workflows and making life much simpler and enjoyable for us.

Aman Khanna

Thank you.

Nitin Bajaj

For me, it's like this: we were ranked around 150 in terms of data usage; today we are number one in data usage. When it comes to increasing the general intelligence of people: today all the data we are consuming goes to media and entertainment. If we can consume data to improve the general intelligence of the public, that will make a large-scale impact on the society and India at large. So in two or three years, if a Sabziwala can figure out how to up-level their business, that would be the best outcome. And then all the Indic models and all the other use cases that are coming out should be able to support those.

And then there will be a mass-scale deployment of AI across the board.

Aman Khanna

Thanks so much, Nitin. And I see that time's up. I wish we could pick your brains further. This has been a truly fascinating conversation, and thank you for being so candid with all your responses. So please join me in thanking both Vivek and Nitin. I would now like to invite Sangeeta Reddy, Joint Managing Director, Apollo Hospitals, to give her remarks. Thank you.


Dr. Vivek Khaneja

Speech speed

157 words per minute

Speech length

1448 words

Speech time

551 seconds

Supercomputing capacity & usage overview

Explanation

Vivek outlines India’s national supercomputing capability, noting the installed petaflop capacity and the large number of researchers leveraging these machines via the National Knowledge Network. This demonstrates the scale of AI compute infrastructure available for government and research missions.


Evidence

“About 48 petaflops of supercomputers are installed in the country.” [13]. “About 15,000 researchers fire jobs across these machines on the NKN (National Knowledge Network).” [5].


Major discussion point

National AI Compute Infrastructure (CDAC) – Supercomputing capacity & usage overview


Topics

Artificial intelligence | Enabling environment for digital development


Need for MLOps expertise and real‑world scaling challenges

Explanation

Vivek stresses that while proof‑of‑concepts are easy, real revenue comes from deploying models at scale, which requires MLOps skills and experience handling messy data. He calls for curriculum changes to train engineers in these practical aspects.


Evidence

“People, we will start to see more MLOps, engineers coming out and deploying these things at scale because POCs is fine, but the real revenue will come only once you actually deploy it at scale.” [28]. “But once it actually goes and hits real‑life situations where the data needs to be cleaned up, it’s not clean, you have no proper experience in actual deployments of the MLOps, you have done it in a canned manner, then suddenly the reality hits that, no, it’s not that simple.” [29]. “Maybe we need to have a serious look at our curriculum in the colleges to see how do I train large models, how do I deploy large models using MLOps, because today, as I said, most of these kids are working on curated data sets.” [30].


Major discussion point

National AI Compute Infrastructure (CDAC) – Need for MLOps expertise and real‑world scaling challenges


Topics

Artificial intelligence | Capacity development


POC success vs production failure due to data quality and operational gaps

Explanation

Vivek points out that enterprises are often satisfied with pilot projects, but moving to production fails because of missing, skewed, or unclean data and lack of real‑time constraints handling. These operational gaps hinder AI adoption beyond the lab.


Evidence

“I think one of the major reasons that, at least personally I have seen, is that people are very happy with the POCs.” [58]. “I mean, you have data which is missing, you have data which is skewed, you have data which needs to be cleaned, I have real time constraints, I have other security considerations.” [51].


Major discussion point

Enterprise AI Adoption Barriers – POC success vs production failure due to data quality and operational gaps


Topics

Artificial intelligence | Capacity development


Pragmatic sovereignty: import silicon, retain control over models, orchestration, applications

Explanation

Vivek argues for a realistic approach to sovereignty where India may import silicon but keep full control over model orchestration and application layers, ensuring strategic independence without requiring end‑to‑end chip design.


Evidence

“So I think a more pragmatic approach is to have maybe the silicon coming from outside and everything.” [61]. “How those models are going to be orchestrated is under your control.” [62]. “The application that will use those models is under your control.” [63]. “Do you want to be completely independent in the entire vertical, right from silicon up to the application?” [65]. “See, when you talk of sovereignty, let’s see what does it really mean.” [68].


Major discussion point

Sovereignty vs Global Technology Dependence – Pragmatic sovereignty


Topics

Data governance | Artificial intelligence | Enabling environment for digital development


Talent gap – theory‑heavy graduates, need curriculum overhaul

Explanation

Vivek observes that Indian graduates excel in mathematics and theory but lack hands‑on experience deploying models, cleaning data, and using MLOps. He calls for curriculum reforms to bridge this gap.


Evidence

“They are good at mathematics, basic understanding but when it comes to actual deployments on field.” [75]. “I mean we do find that we have a set of very bright engineers coming out but most of them are trained in good theoretical understanding of what machine learning is.” [76]. “Theory is fine, but still there’s a lot to learn on the practical side.” [39]. “Maybe we need to have a serious look at our curriculum in the colleges…” [30].


Major discussion point

Talent and Capability Gap – Graduates strong in theory but weak in practical deployment, MLOps, data handling; curriculum overhaul needed


Topics

Capacity development | Artificial intelligence


Energy and sustainability – power‑aware design, liquid cooling, low PUE, benchmark per‑token energy

Explanation

Vivek highlights design choices that improve energy efficiency, such as liquid cooling and power‑aware circuits, achieving a PUE around 1.2 versus traditional 1.4‑1.5. He also calls for benchmarking energy per token to guide model selection.


Evidence

“So, typically we see a PUE of around 1.2 or so as compared to a conventional water-cooled thing which is about 1.4, 1.5.” [87]. “For example, if I talk about CDAC solutions, today we are using liquid cooling as well as water cooling in a ratio of almost 70 to 30.” [89]. “Coming from a VLSI background, I can tell you that there are today’s standard techniques which are being used in all ASIC designs which are power-aware designs.” [91]. “I think the question here to be asked and we need to do this kind of a benchmarking across all the models is what is the energy that I spend per token for training or for inferencing.” [88].


Major discussion point

Energy and Sustainability of AI Infrastructure – Power‑aware design, liquid cooling, low PUE, benchmark energy per token


Topics

Environmental impacts | Artificial intelligence


Vision of success – AI deployed in everyday workflows

Explanation

Vivek envisions AI becoming embedded in numerous workflows, making daily life simpler and more enjoyable for citizens, marking a tangible success metric for India’s AI journey.


Evidence

“Success for me would be AI being actually deployed in a lot of workflows and making life much simpler and enjoyable for us.” [55].


Major discussion point

Vision of Success for India’s AI (3‑5 year horizon) – Widespread AI deployment in workflows that simplify daily life


Topics

Artificial intelligence | Social and economic development



Nitin Bajaj

Speech speed

164 words per minute

Speech length

1874 words

Speech time

685 seconds

ROI and deployment model – no single formula

Explanation

Nitin notes that enterprises struggle to decide between on‑prem, cloud, edge, or sovereign data centers, and that there is currently no universal formula to determine the optimal ROI for AI deployments.


Evidence

“But today there is no single formula.” [44]. “So I think the biggest gap today is what to use, whether to use it on‑prem or whether to go on cloud, use open APIs which are available to them.” [41]. “But in most of those cases, the whole decision‑making is between edge versus on‑prem, cloud versus sovereign data centers, what kind of models they should use, and how to find out the right ROI, which is where we feel frugal AI is what we kind of propose to the industry.” [42]. “it’s a big problem … still they are in that pilot phase because again because of that roi factor” [35].


Major discussion point

Enterprise AI Adoption Barriers – ROI, deployment model (on‑prem vs cloud vs edge) and lack of a single formula


Topics

Artificial intelligence | Enabling environment for digital development


Industry‑specific data sovereignty – cost/performance drivers

Explanation

Nitin emphasizes that for sectors such as banking and healthcare, data sovereignty is critical, but decisions still hinge on balancing cost against performance when choosing models and infrastructure.


Evidence

“for a banking industry, for healthcare industry, data sovereignty is very, very important.” [69]. “So that’s where the cost versus performance comes in.” [49].


Major discussion point

Sovereignty vs Global Technology Dependence – Industry‑specific data‑sovereignty importance; cost/performance still primary drivers


Topics

Data governance | Artificial intelligence | Enabling environment for digital development


Demographic advantage will close skill gaps quickly

Explanation

Nitin points out India’s young, tech‑savvy population will help bridge current AI capability gaps within a few years, turning the demographic dividend into a rapid skill‑building engine.


Evidence

“in other countries it’s an aging population for us it is a booming population at this point in time which with average age of 13 to 25 or so which is kind of exposed to AI so maybe today we may see some sort of gap in terms of AI capabilities but 2 years down the line or 4 years down the line I think that we bridged very very quickly so we have that benefit of demography here in the short run…” [80].


Major discussion point

Talent and Capability Gap – Demographic advantage will rapidly close skill gaps; learning curve expected to be short


Topics

Capacity development | Social and economic development


Intel high‑efficiency data centers and frugal AI hardware

Explanation

Nitin highlights Intel’s ultra‑efficient data centers (PUE 1.06) and power‑saving technologies, positioning them as the hardware foundation for cost‑effective, scalable AI deployments.


Evidence

“Which are running at a PUE of 1.06, which is the most efficient power data center PUE that you could see.” [52]. “One, from manufacturing point of view, Intel is utilizing the latest technologies like RibbonFET and PowerVia, which improves the power efficiency by 15%.” [92]. “Now, finally, again to the point that I was making in the beginning anything that has to scale the price becomes a key driver … frugal AI today the Intel Core Ultra CPUs … allow you enough capability to run maybe a 7 or 8 billion parameter model …” [24].


Major discussion point

Energy and Sustainability of AI Infrastructure – Intel’s high‑efficiency data centers (PUE 1.06), frugal AI hardware, careful model selection to save power


Topics

Environmental impacts | Artificial intelligence | Enabling environment for digital development


Vision of mass‑scale AI impact

Explanation

Nitin envisions AI being deployed at massive scale across industries, driving societal transformation from data consumption to everyday services such as local vendors, powered by Indic models.


Evidence

“And then there will be a mass-scale deployment of AI across the board.” [6]. “that will make a large-scale impact on the society and India at large.” [86]. “And then all the Indic models and all the other use cases that are coming out should be able to support those.” [111].


Major discussion point

Vision of Success for India’s AI (3‑5 year horizon) – Mass‑scale AI impact from data consumption to everyday vendors (e.g., Sabziwala) enabled by Indic models


Topics

Artificial intelligence | Social and economic development



Aman Khanna

Speech speed

152 words per minute

Speech length

1167 words

Speech time

457 seconds

CDAC’s role in national AI compute infrastructure

Explanation

Aman describes CDAC’s contribution of the PARAM supercomputing series, which provides AI compute resources for government departments and national missions, underscoring the strategic infrastructure investment.


Evidence

“CDAC has built the PARAM supercomputing series and AI compute infrastructure.” [1]. “And CDAC has built the PARAM supercomputing series providing AI compute infrastructure for government departments and national missions.” [2]. “Significant infrastructure investments.” [9].


Major discussion point

National AI Compute Infrastructure (CDAC) – Supercomputing capacity & usage overview


Topics

Artificial intelligence | Enabling environment for digital development


Energy and sustainability concerns for large‑scale AI compute

Explanation

Aman raises the question of how the massive energy consumption of supercomputing and large data centers impacts sustainability in India, prompting discussion on power‑aware designs and efficient operations.


Evidence

“…the energy and sustainability question you know super computing uses huge amounts of energy can have societal impacts right CDACs super computing whether the you know large data centers that Intel perhaps you know works with So how do we think about energy and sustainability implications, especially in the Indian context?” [17].


Major discussion point

Energy and Sustainability of AI Infrastructure – Power‑aware design, liquid cooling, low PUE, benchmark energy per token


Topics

Environmental impacts | Artificial intelligence


Agreements

Agreement points

Moving from pilot projects to production scale is a major challenge for AI deployment

Speakers

– Dr. Vivek Khaneja
– Nitin Bajaj

Arguments

Companies struggle to move from proof-of-concepts to production due to real-world data challenges, lack of MLOps expertise, and infrastructure decisions between on-premise, cloud, and edge deployments


Indian enterprises have strong use cases and data advantages, but many remain stuck in pilot phases due to ROI concerns and deployment challenges


Summary

Both speakers acknowledge that while organizations can successfully create AI proof-of-concepts, they face significant barriers when attempting to scale these solutions to production environments, whether due to technical challenges with real-world data or ROI concerns


Topics

Artificial intelligence | The digital economy | The enabling environment for digital development


Energy efficiency and sustainability are critical considerations in AI infrastructure

Speakers

– Dr. Vivek Khaneja
– Nitin Bajaj

Arguments

CDAC addresses power consumption through hardware design techniques like power islands and clock gating, plus infrastructure improvements using liquid cooling to achieve better power usage effectiveness


Intel focuses on manufacturing efficiency improvements and optimized data center operations, emphasizing judicious model selection to minimize power requirements


Summary

Both speakers recognize energy consumption as a major concern in AI deployment and describe their organizations’ technical approaches to improving power efficiency through hardware design, cooling systems, and operational optimization


Topics

Environmental impacts | Artificial intelligence


Practical deployment success requires matching appropriate technology solutions to specific use case requirements

Speakers

– Dr. Vivek Khaneja
– Nitin Bajaj

Arguments

India should focus on controlling critical choke points above the silicon level rather than competing in chip manufacturing, maintaining sovereignty through software stacks and applications while sourcing hardware globally


The key barrier is determining optimal ROI through “frugal AI” approaches that match performance requirements to appropriate hardware capabilities


Summary

Both speakers advocate for pragmatic approaches that focus on optimal resource utilization rather than pursuing maximum performance – whether in terms of technological sovereignty or hardware selection for enterprise deployments


Topics

Artificial intelligence | The enabling environment for digital development | The digital economy


Success should be measured by real-world impact and widespread adoption rather than just technological achievements

Speakers

– Dr. Vivek Khaneja
– Nitin Bajaj
– Aman Khanna

Arguments

Success means AI being deployed in actual workflows to make life simpler and more enjoyable for users


True success will be mass-scale AI deployment that improves general intelligence and helps even small vendors like vegetable sellers upgrade their businesses


Success in India’s AI progress should be measured by practical outcomes and real-world impact rather than just technological achievements or policy announcements


Summary

All three speakers emphasize that true success in AI development should be evaluated based on practical benefits to users and society, with widespread adoption and real-world problem-solving taking precedence over technical milestones or policy announcements


Topics

Social and economic development | Artificial intelligence | Closing all digital divides


Similar viewpoints

While acknowledging current skills gaps in AI deployment, both speakers are optimistic about India’s ability to develop necessary capabilities, though they emphasize different aspects – Khaneja focuses on curriculum improvements while Bajaj highlights demographic advantages

Speakers

– Dr. Vivek Khaneja
– Nitin Bajaj

Arguments

Current engineering graduates have strong theoretical foundations but lack practical deployment skills for real-world scenarios with messy data and production constraints


India’s young demographic advantage will quickly bridge current AI capability gaps within 2-4 years as the population learns and adapts


Topics

Capacity development | Social and economic development


Both speakers take nuanced, industry-specific approaches to sovereignty concerns, recognizing that different sectors have different requirements and that pragmatic solutions often involve hybrid approaches rather than absolute positions

Speakers

– Dr. Vivek Khaneja
– Nitin Bajaj

Arguments

India should focus on controlling critical choke points above the silicon level rather than competing in chip manufacturing, maintaining sovereignty through software stacks and applications while sourcing hardware globally


Data sovereignty requirements vary by industry – critical for banking and healthcare, less so for manufacturing and retail where cloud deployment offers better performance and faster development


Topics

Data governance | The enabling environment for digital development | Building confidence and security in the use of ICTs


Unexpected consensus

Pragmatic approach to technological sovereignty over absolute independence

Speakers

– Dr. Vivek Khaneja
– Nitin Bajaj

Arguments

India should focus on controlling critical choke points above the silicon level rather than competing in chip manufacturing, maintaining sovereignty through software stacks and applications while sourcing hardware globally


Data sovereignty requirements vary by industry – critical for banking and healthcare, less so for manufacturing and retail where cloud deployment offers better performance and faster development


Explanation

Despite representing government R&D and private enterprise perspectives, both speakers converge on practical rather than ideological approaches to sovereignty, acknowledging global interdependence while focusing on strategic control points


Topics

The enabling environment for digital development | Data governance | Artificial intelligence


Focus on inclusive, grassroots AI adoption over elite technological advancement

Speakers

– Dr. Vivek Khaneja
– Nitin Bajaj

Arguments

Success means AI being deployed in actual workflows to make life simpler and more enjoyable for users


True success will be mass-scale AI deployment that improves general intelligence and helps even small vendors like vegetable sellers upgrade their businesses


Explanation

Both speakers, despite their different institutional backgrounds, emphasize democratized AI benefits reaching small-scale entrepreneurs and everyday users rather than focusing primarily on high-tech achievements or enterprise applications


Topics

Social and economic development | Closing all digital divides | Artificial intelligence


Overall assessment

Summary

The speakers demonstrate strong consensus on practical challenges in AI deployment, the importance of energy efficiency, pragmatic approaches to technology sovereignty, and the need for inclusive success metrics focused on real-world impact


Consensus level

High level of consensus with complementary perspectives rather than conflicting viewpoints. The government R&D and enterprise viewpoints align on fundamental challenges and solutions, suggesting good coordination between policy infrastructure development and market needs. This consensus indicates a mature understanding of AI deployment realities and bodes well for coordinated progress in India’s AI ecosystem.


Differences

Different viewpoints

Approach to talent and capability gaps in AI

Speakers

– Dr. Vivek Khaneja
– Nitin Bajaj

Arguments

Current engineering graduates have strong theoretical foundations but lack practical deployment skills for real-world scenarios with messy data and production constraints


India’s young demographic advantage will quickly bridge current AI capability gaps within 2-4 years as the population learns and adapts


Summary

Dr. Khaneja sees current talent gaps as a significant structural problem requiring curriculum changes and practical training, while Nitin Bajaj views it as a temporary issue that will be naturally resolved by India’s young demographic advantage


Topics

Capacity development | Social and economic development


Data sovereignty requirements and deployment preferences

Speakers

– Dr. Vivek Khaneja
– Nitin Bajaj

Arguments

India should focus on controlling critical choke points above the silicon level rather than competing in chip manufacturing, maintaining sovereignty through software stacks and applications while sourcing hardware globally


Data sovereignty requirements vary by industry – critical for banking and healthcare, less so for manufacturing and retail where cloud deployment offers better performance and faster development


Summary

Dr. Khaneja advocates for a uniform approach to sovereignty focusing on software control across all sectors, while Nitin Bajaj argues for industry-specific approaches where some sectors can prioritize cloud performance over sovereignty


Topics

Data governance | The enabling environment for digital development | Building confidence and security in the use of ICTs


Unexpected differences

Timeline and urgency of addressing AI talent gaps

Speakers

– Dr. Vivek Khaneja
– Nitin Bajaj

Arguments

Current engineering graduates have strong theoretical foundations but lack practical deployment skills for real-world scenarios with messy data and production constraints


India’s young demographic advantage will quickly bridge current AI capability gaps within 2-4 years as the population learns and adapts


Explanation

This disagreement is unexpected because both speakers work closely with the Indian tech ecosystem and might be expected to have similar assessments of talent readiness. Dr. Khaneja’s pessimistic view contrasts sharply with Nitin Bajaj’s optimistic demographic-based projection, suggesting fundamentally different perspectives on how quickly practical AI skills can be developed


Topics

Capacity development | Social and economic development


Overall assessment

Summary

The speakers show moderate disagreement on key strategic approaches to AI development in India, particularly around talent development timelines, sovereignty implementation strategies, and the urgency of addressing current capability gaps


Disagreement level

The disagreements are substantive but not fundamental – both speakers share similar goals for India’s AI success but differ on implementation approaches, timelines, and priorities. These differences reflect their distinct organizational perspectives (government R&D institution vs. private enterprise) and could lead to misaligned strategies if not reconciled through policy coordination


Partial agreements

Both speakers agree that enterprises face significant barriers in scaling AI from pilots to production, but they emphasize different root causes – Dr. Khaneja focuses on technical deployment challenges and data quality issues, while Nitin Bajaj emphasizes ROI concerns and decision complexity

Speakers

– Dr. Vivek Khaneja
– Nitin Bajaj

Arguments

Companies struggle to move from proof-of-concepts to production due to real-world data challenges, lack of MLOps expertise, and infrastructure decisions between on-premise, cloud, and edge deployments


Indian enterprises have strong use cases and data advantages, but many remain stuck in pilot phases due to ROI concerns and deployment challenges


Topics

Artificial intelligence | The digital economy | The enabling environment for digital development


Both speakers agree that energy efficiency is critical for AI infrastructure, but they propose different solutions – Dr. Khaneja focuses on hardware design and cooling infrastructure improvements, while Nitin Bajaj emphasizes manufacturing technology advances and careful model selection

Speakers

– Dr. Vivek Khaneja
– Nitin Bajaj

Arguments

CDAC addresses power consumption through hardware design techniques like power islands and clock gating, plus infrastructure improvements using liquid cooling to achieve better power usage effectiveness


Intel focuses on manufacturing efficiency improvements and optimized data center operations, emphasizing judicious model selection to minimize power requirements


Topics

Environmental impacts | Artificial intelligence


Both speakers agree that success should be measured by real-world deployment and user benefit, but they have different scopes – Dr. Khaneja focuses on workflow integration and user experience, while Nitin Bajaj emphasizes mass democratization and socioeconomic empowerment

Speakers

– Dr. Vivek Khaneja
– Nitin Bajaj

Arguments

Success means AI being deployed in actual workflows to make life simpler and more enjoyable for users


True success will be mass-scale AI deployment that improves general intelligence and helps even small vendors like vegetable sellers upgrade their businesses


Topics

Social and economic development | Artificial intelligence | Closing all digital divides


Similar viewpoints

While acknowledging current skills gaps in AI deployment, both speakers are optimistic about India’s ability to develop necessary capabilities, though they emphasize different aspects – Dr. Khaneja focuses on curriculum improvements while Nitin Bajaj highlights demographic advantages

Speakers

– Dr. Vivek Khaneja
– Nitin Bajaj

Arguments

Current engineering graduates have strong theoretical foundations but lack practical deployment skills for real-world scenarios with messy data and production constraints


India’s young demographic advantage will quickly bridge current AI capability gaps within 2-4 years as the population learns and adapts


Topics

Capacity development | Social and economic development


Both speakers take nuanced, industry-specific approaches to sovereignty concerns, recognizing that different sectors have different requirements and that pragmatic solutions often involve hybrid approaches rather than absolute positions

Speakers

– Dr. Vivek Khaneja
– Nitin Bajaj

Arguments

India should focus on controlling critical choke points above the silicon level rather than competing in chip manufacturing, maintaining sovereignty through software stacks and applications while sourcing hardware globally


Data sovereignty requirements vary by industry – critical for banking and healthcare, less so for manufacturing and retail where cloud deployment offers better performance and faster development


Topics

Data governance | The enabling environment for digital development | Building confidence and security in the use of ICTs


Takeaways

Key takeaways

India should adopt a pragmatic approach to AI sovereignty by controlling critical software layers (models, orchestration, applications) while sourcing hardware globally, rather than attempting complete vertical integration


The primary barrier to enterprise AI adoption is moving from successful pilots to production scale, hampered by real-world data challenges, MLOps expertise gaps, and ROI uncertainty


A ‘frugal AI’ approach is needed where enterprises match their performance requirements to appropriate hardware capabilities rather than defaulting to expensive GPU solutions


Data sovereignty requirements vary significantly by industry – critical for banking/healthcare but less important for manufacturing/retail where cloud deployment offers advantages


India’s young demographic will be a key advantage in bridging current AI talent gaps within 2-4 years


Energy efficiency must be addressed through both hardware design improvements and judicious model selection to ensure sustainable AI deployment


Resolutions and action items

CDAC is developing its own RISC-V based GPGPU with target completion by 2029-30


CDAC plans to expand supercomputing capacity from 48 petaflops to 100 petaflops by the end of the year, across 60 installations


Need for curriculum reform in colleges to include practical MLOps training and real-world deployment scenarios beyond theoretical knowledge


Establishment of benchmarking standards for energy consumption per token for training and inference across AI models


Unresolved issues

How to effectively bridge the gap between government AI infrastructure capabilities and enterprise deployment needs


Specific mechanisms for translating policy announcements and infrastructure investments into actual scaled adoption


Clear guidelines for enterprises on optimal deployment models (on-premise vs cloud vs edge) based on use case requirements


Standardized approaches for measuring and comparing ROI across different AI deployment scenarios


Detailed roadmap for developing domestic chip manufacturing capabilities while maintaining current pragmatic approach


Suggested compromises

Focus sovereignty efforts on software stack control while accepting dependence on global hardware suppliers in the short-to-medium term


Implement hybrid deployment strategies where data sovereignty requirements are balanced with performance and cost considerations based on industry needs


Adopt a mixed approach using existing CPU infrastructure for appropriate workloads while selectively deploying GPUs only where performance requirements justify the cost


Combine theoretical AI education with mandatory practical capstone projects involving real-world data challenges


Thought provoking comments

Should I really be competing against an H100 or a B200 from NVIDIA? Probably not in the short term. But yes, as an aspirational goal, just to let you know, CDAC is designing its own GPGPU based on RISC-V. We will probably have something by the end of 2029-30. But then, till that time, we really need to have a lot of this entire stack under sovereign control.

Speaker

Dr. Vivek Khaneja


Reason

This comment is remarkably candid and pragmatic about India’s technological sovereignty ambitions. Instead of offering typical aspirational rhetoric, Khaneja provides a realistic timeline and acknowledges current limitations while outlining a practical approach to sovereignty – controlling the software stack while sourcing hardware globally.


Impact

This shifted the conversation from abstract policy discussions about sovereignty to concrete technical realities. It established a framework for thinking about sovereignty in layers rather than absolute terms, influencing how subsequent questions about data sovereignty and enterprise decisions were framed and answered.


I think one of the major reasons that, at least personally I have seen, is that people are not being able to come out of the POCs. I think one of the major reasons… is that people are very happy with the POCs. They can train them on curated data sets. But once it actually goes and hits real-life situations where the data needs to be cleaned up, it’s not clean, you have no proper experience in actual deployments of the MLOps, you have done it in a canned manner, then suddenly the reality hits that, no, it’s not that simple.

Speaker

Dr. Vivek Khaneja


Reason

This comment cuts through the hype around AI adoption to identify a critical bottleneck – the gap between proof-of-concept success and real-world deployment. It highlights the often-overlooked complexity of data quality and MLOps in production environments.


Impact

This interjection deepened the technical discussion and provided concrete validation for Nitin Bajaj’s earlier points about enterprise struggles. It moved the conversation from surface-level barriers to fundamental technical challenges, establishing MLOps expertise as a key capability gap that needed addressing.


Do I need a GPU in every instance is the first question that we are trying to ask. And today everybody has a CPU in their environment. So can they reutilize that? And can they test it out?… One example that I can kind of discuss with my customers is if you’re looking at, say, a prompt-based engine where you need to do a document search, typically a human eye can read about 10 prompts in a second if somebody is a fast reader. Now if a processor can give you 15 to 20 prompts, is this good enough as a performance for you or do you want 200 prompts in that particular second?

Speaker

Nitin Bajaj


Reason

This comment introduces a fundamentally different approach to AI infrastructure decisions by questioning the assumption that more powerful hardware is always better. The human-baseline comparison provides a practical framework for evaluating ‘good enough’ performance versus over-engineering.


Impact

This reframed the entire infrastructure discussion from a ‘bigger is better’ mentality to a cost-benefit analysis approach. It introduced the concept of ‘frugal AI’ and influenced how both panelists subsequently discussed deployment decisions, emphasizing fit-for-purpose solutions over maximum performance.


For me it is like when we were like about 150 in terms of data usage. Today we are number one in terms of data usage… If we can consume the data for improving the general intelligence of the public, that will make a large-scale impact on the society and India at large. So in two or three years, if a Sabziwala can figure out how they can kind of up-level their state, that would be the best way.

Speaker

Nitin Bajaj


Reason

This comment provides a uniquely Indian perspective on AI success metrics, moving beyond enterprise adoption to societal transformation. The ‘Sabziwala’ (vegetable vendor) example grounds AI’s potential impact in relatable, grassroots terms rather than corporate boardrooms.


Impact

This final comment elevated the discussion from technical and business considerations to societal impact, providing a broader context for measuring AI success in India. It connected the technical infrastructure discussion to democratic access and social mobility, offering a vision that transcends typical enterprise-focused AI narratives.


There should be a critical benchmark. We need to seriously look at how I can optimize my models to be more power aware. Can I have compressed models which take less energy?… there is this joke that we keep on saying that *** But along with the hyperscaler, you also need to have a small power plant which needs to be designed together with it.

Speaker

Dr. Vivek Khaneja


Reason

This comment introduces energy efficiency as a first-class design consideration rather than an afterthought, proposing specific metrics (energy per token) and acknowledging the scale of power requirements with humor that makes the technical challenge more relatable.


Impact

This shifted the sustainability discussion from abstract environmental concerns to concrete engineering metrics and design decisions. It established energy efficiency as a technical optimization problem that could be measured and improved, rather than just a policy consideration.


Overall assessment

These key comments transformed what could have been a typical policy-heavy discussion into a nuanced, technically grounded conversation about AI deployment realities. The most impactful moments came when speakers challenged conventional assumptions – whether about sovereignty requiring complete independence, GPU necessity for all AI workloads, or success being measured purely in enterprise terms. Dr. Khaneja’s candid acknowledgment of technical limitations and realistic timelines set a tone of pragmatic honesty that elevated the entire discussion. Nitin Bajaj’s ‘frugal AI’ concept and human-baseline performance comparisons provided practical frameworks for decision-making. Together, these insights created a conversation that bridged the gap between policy aspirations and ground-level implementation challenges, offering actionable perspectives for both government infrastructure planning and enterprise AI adoption strategies.


Follow-up questions

What specific benchmarking should be established for energy consumption per token for training and inference across AI models?

Speaker

Dr. Vivek Khaneja


Explanation

This is important for establishing industry standards to measure and optimize energy efficiency in AI deployments, which is critical for sustainability and cost management


How can curriculum in colleges be redesigned to include practical MLOps training and real-world deployment scenarios beyond curated datasets?

Speaker

Dr. Vivek Khaneja


Explanation

This addresses the critical talent gap between theoretical knowledge and practical AI deployment skills that is preventing successful scaling from pilots to production


What is the optimal deployment model (edge vs on-prem vs cloud) for different enterprise use cases and how can ROI be accurately calculated?

Speaker

Nitin Bajaj


Explanation

This is crucial for enterprises to make informed decisions about AI infrastructure investments and achieve successful scaling beyond pilot projects


Do enterprises really need GPUs for every AI use case, and when are CPUs with integrated AI capabilities sufficient?

Speaker

Nitin Bajaj


Explanation

This question is important for cost optimization and practical deployment decisions, as it could significantly reduce infrastructure costs for many AI applications


What are the specific performance requirements (prompts per second) needed for different AI use cases to determine appropriate hardware choices?

Speaker

Nitin Bajaj


Explanation

Understanding performance requirements is essential for making cost-effective hardware decisions and avoiding over-provisioning of AI infrastructure


How can capstone projects be designed to handle peta-scale data with real-world constraints like missing, skewed, or dirty data?

Speaker

Dr. Vivek Khaneja


Explanation

This is needed to bridge the gap between academic training and practical AI deployment skills, addressing the talent shortage in production-ready AI capabilities


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.