From KW to GW: Scaling the Infrastructure of the Global AI Economy

20 Feb 2026 15:00h - 16:00h


Session at a glance

Summary, keypoints, and speakers overview

Summary

The panel examined how India can achieve AI sovereignty while rapidly scaling AI infrastructure and services, emphasizing that sovereignty and innovation must progress together and that true sovereignty requires control over both hardware and software [1][7-13][14]. Ankush argued that India’s aspirational stance will make it a global AI hub within months [2][3], and Nitin highlighted Google’s new Vizag data centers and an on-premise “indigenous data box” that delivers full Gemini AI capabilities while keeping data local [8][10-13]. He stressed that controlling both hardware and software is essential for true sovereignty [14].


Sudeesh described the IRCTC ticketing platform’s demand-supply mismatch and how advanced AI, built with both global and indigenous components, is used to detect and mitigate automated booking bots during peak tatkal periods [16-19][21-25]. He confirmed collaboration with Indian startups for data analysis and continuous monitoring [22-24]. Ankush explained that Bharat GPT follows an “AI with purpose and trust” approach, focusing on domain-specific, small-to-mid-size models trained on partner data rather than a generic large language model for consumers [28-34][36-38][41-44].


Addressing inclusivity, Nitin cited Google’s provision of free Gemini-powered JEE mock exams to broaden access for students in underserved areas [58-62].


Srirang introduced the concept of AI factories, noting that gigawatt-scale data centers are shifting from an “outside-in” to an “inside-out” design where workloads dictate infrastructure [71-74]. Peter and Jigar explained that speed at scale requires modular GPU pods, with rack power densities rising from 10 kW to over 240 kW and future megawatt-per-rack designs, enabling rapid deployment of AI workloads [108-112][118-121][160-163]. They stressed the use of reference designs and pod-level standardization to maximize utilization and simplify upgrades across GPU generations [252-259][262-267].


Srikanth and Sanjay warned that future-proof data centers must consider row-level density and modular pods to avoid costly retrofits, leveraging digital twins for design validation [665-667][666-670]. To support this growth, NVIDIA and partners are launching skill-development programs with Indian institutes and promoting off-site prefabricated systems to accelerate build-out while addressing energy-efficiency and PUE challenges [710-718][739-744][779-784].


Overall, the discussion concluded that coordinated efforts across sovereign AI models, scalable infrastructure, and talent development are critical for India to become a leading, self-reliant AI ecosystem [1][71-74][108-112].


Keypoints

Major discussion points


AI sovereignty and the need for indigenous solutions – The panel stressed that AI innovation must be coupled with data-sovereignty, highlighting Google’s new Indian data centers and on-premise “indigenous data box” that runs Gemini AI services inside the customer’s premises [7-14]. Ankush’s “Bharat GPT” is presented as a purpose-driven, trust-focused model built for Indian enterprises rather than a generic large-language model [28-38].


Applying AI to critical Indian services and promoting inclusivity – AI is already being used to manage massive demand spikes on IRCTC ticketing and to curb automated abuse [15-18][21-25]. Google is extending inclusive AI tools such as free Gemini-powered JEE mock exams for students across the country [58-62]. The discussion also touched on AI-driven fraud detection in subsidies and UPI transactions as examples of societal impact [298-310].


Scaling AI infrastructure: “AI factories”, GPU pods and gigawatt-scale data centers – Multiple speakers described a shift from traditional data-center design to purpose-built AI factories, emphasizing “speed at scale”, modular GPU pods, and reference designs that can be replicated across generations [99-108][119-130][158-166][210-218][245-258]. The goal is to move from 1.5 GW today to 10 GW+ within a few years [210-218].


Energy efficiency, PUE optimisation and future-proof design – The conversation highlighted the challenges of cooling high-density GPU racks, the limits of PUE as a metric, and strategies such as adaptive chillers for seasonal temperature swings [710-730]. Future-proofing requires thinking beyond rack density to row-level “bounding boxes” and integrating chip-to-data-center telemetry [665-668][739-748].


Building talent and skill pipelines for AI-scale operations – Recognising the talent gap, NVIDIA/Vertiv outlined training programmes with Indian institutes (e.g., IIT-Chennai) to certify engineers in operations, maintenance, and design of AI-optimized data centers [778-788][811-818].


Overall purpose / goal


The panel was convened to map India’s roadmap for becoming a sovereign, inclusive AI hub: showcasing current AI deployments, outlining the technical and infrastructural upgrades needed to support massive AI workloads, and identifying policy, sustainability, and talent-development actions required to accelerate the nation’s AI ecosystem.


Overall tone and its evolution


The discussion began with an optimistic, visionary tone, emphasising India’s aspirational role and the promise of sovereign AI [1-3][28-31]. It then shifted to a pragmatic, solution-focused tone as speakers detailed concrete technical measures (data boxes, GPU pods, reference designs) [7-14][99-108][245-258]. A later segment adopted a more cautionary yet collaborative tone around energy, cost, and design challenges [710-730][665-668]. The conversation concluded on an encouraging, forward-looking note, stressing partnership, rapid deployment, and skill-building [378-386][778-788].


Speakers

Ankush Sabharwal – Role/Title: (not explicitly stated in the transcript) – Area of expertise: AI sovereignty, Bharat GPT, AI strategy for India. [S6][S7]


Akanksha Swarup – Role/Title: Moderator / Host of the panel – Area of expertise: Interviewing, moderating AI-focused discussions. [S13]


Nitin Gupta – Role/Title: Google employee (speaking on behalf of Google) – Area of expertise: AI services, data-center sovereignty, Google Gemini, on-premise AI solutions. [S16][S17]


Sudeesh VC Nambiar – Role/Title: IRCTC representative (AI & ML for railway ticketing) – Area of expertise: AI-driven demand-supply management for railway bookings.


Srirang Deshpande – Role/Title: Strategy lead for India, Vertiv (managing Vertiv strategy & market development) – Area of expertise: Data-center strategy, AI-infrastructure planning. [S8][S9]


Moderator – Role/Title: Conference moderator – Area of expertise: Session facilitation and audience interaction.


Peter Panfil – Role/Title: Vertiv senior executive (panelist) – Area of expertise: AI factories, GPU-centric data-center design, speed-at-scale deployment. [S23]


Jigar Halani – Role/Title: NVIDIA representative / industry veteran – Area of expertise: AI factories, GPU infrastructure, AI model deployment. [S18][S19]


Srikanth Cherukuri – Role/Title: Vertiv executive (panelist) – Area of expertise: AI-factory blueprinting, GPU-first design, future-proof data-center architecture. [S24]


Sanjay Kumar Sainani – Role/Title: Senior Vice President, Technical Business Development, Vertiv – Area of expertise: High-density AI data-centers, power & cooling efficiency, scaling AI infrastructure. [S4][S5]


Audience – Role/Title: Various audience members asking questions – Area of expertise: Varied (AI inclusivity, AI-human interaction, AI ecosystem in India, talent development, etc.).


Additional speakers:


None identified beyond the speakers listed above.


Full session report

Comprehensive analysis and detailed insights

Opening remarks – AI sovereignty


Ankush Sabharwal opened the panel by asserting that India’s ambition to become a global AI hub must rest on “complete sovereignty in terms of AI and not just the platform” and that this transformation will happen “in a few months, not years” [1-2]. He framed sovereignty as inseparable from innovation, a view echoed by Nitin Gupta who said “sovereignty and innovation … have to run together” and that the two are not mutually exclusive [7-9].


Google’s sovereign-AI offerings


Google outlined its contribution through the announcement of new data-centre capacity in Vizag, which will keep “innovation and any data-residency things … within the boundaries of India” [8]. More importantly, Google introduced an “indigenous data box” that resides entirely on a customer’s premises yet delivers the full suite of Gemini AI services, giving users the power of a Google data centre “inside your own premise” while also “controlling the hardware” [10-14].


Inclusivity and the digital-divide


Moderator Akanksha Swarup asked how Google would address the digital divide and serve under-privileged and rural populations [49-53]. Nitin Gupta responded by highlighting that Google has made “JEE Main exams, mock exams … available on Gemini free of cost for any student to try”, positioning the offering as a step toward democratising AI-enabled education [58-62].


IRCTC ticket-booking AI use-case


Sudeesh VC Nambiar described the severe “demand-supply mismatch” on the IRCTC ticketing platform during peak tatkal windows (8 am, 10 am, 11 am) [16-19] and explained that a “very advanced AI solution … maybe the best in the world” is being employed to detect and curb automated booking bots [21-25]. The solution incorporates a “layer of indigenous” models and collaborates with an Indian startup that performs continuous data-analysis and social-media monitoring [23-25].


Bharat GPT – purpose-driven AI


Ankush Sabharwal then detailed the philosophy behind “Bharat GPT”. The tagline “AI with purpose and trust” reflects a focus on solving specific enterprise problems rather than creating a consumer-facing large language model [28-34]. The development process follows a “begin with the end in mind” habit: model size is chosen based on the use-case, data are sourced from partners, and domain-specific models (e.g., for railways) are trained using the partner’s existing knowledge [36-44][45-46].
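The “model size follows the use case” step described above can be sketched as a tiny selection helper. Everything here (the tier names, the latency threshold, and the function itself) is an illustrative assumption, not Bharat GPT’s actual logic:

```python
# Toy sketch of "begin with the end in mind" model selection: choose the
# smallest model tier that plausibly fits the use-case constraints.
# Tier names and thresholds are hypothetical.

def pick_model_tier(domain_specific: bool, latency_ms_budget: int) -> str:
    """Pick a model tier from use-case constraints (illustrative only)."""
    if domain_specific and latency_ms_budget < 200:
        return "small"    # narrow domain, tight latency: small fine-tuned model
    if domain_specific:
        return "medium"   # narrow domain, relaxed latency
    return "large"        # open-ended tasks fall back to a larger model

print(pick_model_tier(domain_specific=True, latency_ms_budget=100))
```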


AI factories and inside-out design


Srirang Deshpande introduced the concept of “AI factories”, noting that the industry is moving from an “outside-in” to an “inside-out” data-centre design where workloads dictate the infrastructure [71-74].


Infrastructure at scale – chip-first philosophy


Peter Panfil and Jigar Halani expanded on this by describing “speed at scale” as the need to design from the GPU chip upward, creating modular GPU pods that can be replicated rapidly. They cited the evolution of rack power density from ~10 kW to >240 kW and the prospect of “megawatt-per-rack” designs, which would enable a single hall to serve a substantial portion of India’s AI demand [106-112][118-121][160-163].
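The density figures quoted here imply a dramatic shrink in footprint for a fixed power budget. A minimal sketch, assuming a hypothetical 12 MW hall budget (the density steps are the ones cited in the discussion, with 1 MW per rack as the speculative end point):

```python
# Racks needed for a fixed AI-hall power budget at the rack densities cited
# in the discussion. The 12 MW hall budget is an assumed example figure.

HALL_BUDGET_KW = 12_000  # assumed 12 MW of critical IT load for one hall

for density_kw in (10, 40, 130, 240, 1000):  # kW/rack; 1000 = speculative 1 MW rack
    racks = HALL_BUDGET_KW / density_kw
    print(f"{density_kw:>5} kW/rack -> {racks:6.0f} racks")
```

The same IT load that once needed 1,200 cloud-era racks fits in a dozen megawatt-class racks, which is the footprint argument the panel is making.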


Reference-design “magic numbers” from Vertiv and NVIDIA allow a pod to support three GPU generations without redesigning power or cooling infrastructure [252-259][262-267]. Jigar Halani stressed that focusing on “row-level” or “data-hall-level” density, rather than individual rack density, prevents costly retrofits and aligns with the “bounding-box” methodology [665-667]. Sanjay Kumar Sainani added that a pod can be built to a fixed power and liquid-cooling capacity (e.g., 2.4 MW or 6 MW) and later re-configured for newer GPUs, thereby future-proofing the deployment [692-699].
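The “design the pod once, refit across generations” idea can be sketched as a capacity check against a fixed pod envelope. The 2.4 MW envelope is the figure from the discussion; the per-rack draws for successive generations are illustrative assumptions:

```python
# How many racks of each GPU generation fit inside a fixed pod power envelope.
# POD_POWER_KW matches the 2.4 MW figure from the discussion; the per-rack
# draws for the three generations are hypothetical.

POD_POWER_KW = 2_400  # fixed pod envelope (2.4 MW)

generations = {
    "gen N":     130,   # kW per rack, assumed
    "gen N + 1": 240,
    "gen N + 2": 480,
}

for name, rack_kw in generations.items():
    racks = POD_POWER_KW // rack_kw   # whole racks within the same envelope
    print(f"{name}: {racks} racks of {rack_kw} kW in the same 2.4 MW pod")
```

Fewer, denser racks per generation, but the pod’s power and liquid-cooling infrastructure stays untouched, which is the retrofit-avoidance point being made.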


Energy efficiency and telemetry integration


Sanjay Sainani warned that Power Usage Effectiveness (PUE) can be “misleading” because raising ambient temperature artificially improves the metric while increasing overall power consumption [710-720]. He advocated seasonal cooling strategies, using free cooling in winter and supplemental chillers in summer, to optimise the annual PUE across India’s diverse climate zones (10 °C to 48 °C) [730-738]. Srikanth Cherukuri echoed this concern, noting that current data-centre telemetry does not communicate with chip-level telemetry, and that integrating the two would enable automated, real-time energy optimisation [739-748]. He also highlighted the use of digital twins to simulate entire pods and verify that “the whole pod as one big block” meets power, thermal and redundancy requirements before construction, reducing the risk of over-building [739-748].
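Sainani’s caution about PUE can be made concrete with toy numbers: warmer supply air cuts the cooling overhead (so PUE improves), but pushes more power into server fans, which are booked as IT load, so total facility power actually rises. All figures below are illustrative assumptions:

```python
# Why PUE alone can mislead: a "better" PUE with higher total facility power.
# All kW figures are illustrative assumptions.

def pue(it_kw: float, overhead_kw: float) -> float:
    """PUE = total facility power / IT power."""
    return (it_kw + overhead_kw) / it_kw

# Baseline: cooler supply air, more chiller work (overhead), modest fan power.
base_pue = pue(it_kw=1000, overhead_kw=250)      # total 1250 kW

# Warmer supply air: chillers save 100 kW of overhead, but server fans draw
# 150 kW more, and fan power counts as "IT" load, flattering the ratio.
warm_pue = pue(it_kw=1150, overhead_kw=150)      # total 1300 kW

print(f"baseline: PUE {base_pue:.2f}")
print(f"warmer:   PUE {warm_pue:.2f} (better PUE, yet 50 kW more in total)")
```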


Future-proofing and modularity


The “bounding-box” (row-level) design philosophy, together with the reference-design pod approach, allows multi-generation GPU reuse and avoids expensive retrofits [665-667][252-259][262-267]. Sanjay Sainani reiterated that a 1 MW rack is imminent and that a megawatt-scale rack could replace the footprint of an entire 1 MW data-centre [692-699].


Talent and skill development


The moderator described an 8-to-12-week training programme in partnership with IIT-Chennai that equips engineers with the skills to operate and maintain AI-optimised data centres [778-784]. Srikanth Cherukuri and Sanjay Sainani reinforced the importance of prefabricated, modular systems (e.g., Vertiv SmartRun) that can be assembled off-site, allowing parallel development and testing, and thereby shortening build-out times [796-804][811-818].


Audience Q&A highlights


Questions from the audience covered AI consciousness, India’s five-layer AI stack (energy, infrastructure, compute, models, applications), and statistics showing that India generates ~20% of the world’s data but hosts only ~3% of global data-centre capacity [910-918]. NVIDIA-ready / DGX-ready certification and the push for purpose-built AI factories were also discussed [842-850]. Additional examples of AI for agriculture and fraud detection were mentioned [896-904].


Closing remarks


Peter Panfil concluded by reiterating the “chip-first” approach: design should start from the GPU chip and then define the supporting power, cooling and rack infrastructure [106-112]. The panel collectively emphasized three pillars for India’s AI future: (1) speed at scale through modular AI factories, (2) sustainability via energy-efficient, row-level designs and token-per-watt metrics, and (3) a skilled talent pipeline supported by industry-academic programmes. Together, these elements position India to build sovereign, high-density AI infrastructure rapidly, while addressing energy, talent and ecosystem challenges.


Session transcript

Complete transcript of the session
Ankush Sabharwal

having the complete sovereignty in terms of AI and not just the platform. I think India being so aspirational and ready to adopt new technology for the welfare of themselves and the welfare of the businesses, I think we would be the hub of AI development for the world. You will start seeing that happening in a few months, not years.

Akanksha Swarup

It’s actually heartwarming to hear that from someone who’s actually fronting India’s AI story at the moment. Nitin, as someone who is at Google, how do you see this for India? Do you think India has the right infrastructure, the right resources to build its own sovereign AI at the moment?

Nitin Gupta

First of all, thank you, Corvo team, Ankush, for inviting me here. And, you know, I’ll be very happy to share my views from a Google perspective and from my personal perspective. I feel, yes, sovereignty is very important, but at the same time it is not a question of sovereignty or innovation; it is sovereignty and innovation. They have to run together; they can’t be one choice versus the other. And with that, Google, while we have our data centers in India, you have heard that three months back we announced we are going to be building big data centers in Vizag. So we are ensuring that any innovation and any data-residency things are kept within the boundaries of India. Those data centers are definitely empowering a lot of AI, but they are for everyone, for all types of personas, whether they’re government, enterprises, startups, students, colleges, universities.

We understand that, you know, sometimes there is going to be critical data which needs to stay even more secure. And for that, Google has created a completely indigenous data box which stays entirely inside the customer premise and is fully powered by AI. So imagine that you have the full potential to run what you’re running in a Google data center, but inside your own premise. And that has full Google Gemini AI services. And that’s the definition we have for sovereignty: where you are also controlling the hardware, not only what’s running on that hardware.

Akanksha Swarup

All right. So, IRCTC is one of the most heavily used websites in India. My data research says close to 50 million users visit every month on average; correct me if I am wrong. But how are you incorporating or leveraging AI, especially in peak periods, say at tatkal booking time, when traffic dramatically peaks?

Sudeesh VC Nambiar

Yeah, so we have a tremendous mismatch of demand and supply as far as railway ticketing is concerned. We have the peak at 8 o’clock in the morning, when tickets open for travel 60 days hence, then 10 o’clock for the AC tatkal and 11 o’clock for the sleeper tatkal. So there is huge demand and a demand-supply mismatch as of today, and people try to misuse automated tools for accessing it. So this is a constant, I would say, cat-and-mouse game we are playing. And we are using AI also, a very advanced AI solution, maybe said to be the best-in-the-world solution we are using.

Akanksha Swarup

Any indigenous models are used?

Sudeesh VC Nambiar

Indigenous, of course, we have a layer of indigenous. There is a startup also who are doing the data analysis, and they constantly monitor the social media to see what is happening and what the strategy is. So it is basically a collaboration between Indian startups and the technology strength of a global company. So we are using an AI- and ML-based model. The model constantly learns and tries to… mitigate those automated…

Akanksha Swarup

Okay. Ankush, what differentiates Bharat GPT in terms of its vision when you compare it to say global models like ChatGPT or even Gemini and especially how is it curated for Indian citizens and enterprises? What is that differentiating factor?

Ankush Sabharwal

Yeah, see, our tagline is AI with purpose and trust, right? So, whatever we are doing: I had read that book, The Seven Habits of Highly Effective People, very early on in my career. Begin with the end in mind. We always think, what’s the use case? What’s the problem you’re going to solve? And then see what kind of model you need: tiny, small, medium, large. And then you see, okay, from where the data would come. See, the Bharat GPT family of models, right? It’s not a large language model; it’s not ready for consumers yet, right? So we work with our partners, get their data, and train the model for their users, because we believe…

It is easy for us to solve the problems of enterprises, because enterprises like IRCTC already know their domain. We cannot learn that, right? And if you say, hey, I can create travel AI solutions, that’s very, very difficult, right? So they know travel, they know railways. So it would be, I think, much better to work with them and learn from them. They already know; they are already solving a lot of problems, and they also know the real problem. They don’t have an existential crisis, right? So they are not just in the game of valuation. They are solving real-world problems.

Akanksha Swarup

That’s why we have him on stage with you today. He’ll share those precious tips. My last question, since we are running short of time: Nitin, I think this session is not only to highlight the achievements; it’s also to highlight the concerns. And right now, one concern which the Indian Prime Minister has also highlighted is that of inclusivity. How is Google trying to bridge that divide? As far as the digital divide is concerned, how do you make Google more accessible for the underprivileged, for those in rural areas? Nitin, before you answer: I have my colleagues from the other team, Vertiv. I would like to apologize to them for this delay, but allow us just to wind this up.

Nitin Gupta

Yeah, I’ll take a minute. Okay. So, great question. And, you know, Google has always been at the forefront of inclusivity, whether you call it Gmail or search. It is empowering billions of users every day. And just to give a recent example, Sundar Pichai has very recently announced that the JEE Main mock exams are available on Gemini free of cost for any student to try. That’s the inclusivity we want. We want to make sure that a student at home can keep on trying the mock tests for free.

Akanksha Swarup

All right. Amazing. Amazing. Which is inclusive. Inclusive and democratic. Many thanks to you three gentlemen. It was a pleasure having you all over here. Thank you so much.

Srirang Deshpande

Good morning to all of you. As Rakesh has already introduced, our two companies are planning a lot of things together. As I said, I am part of strategy for India, managing Vertiv strategy and market development. The important thing we are bringing for you today is this: we see a lot of gigawatt infrastructures getting announced, and that poses a lot of challenges for us. Until now, data centers have been built from an outside-in approach, and now the time has come when data centers are getting built from an inside-out approach. So first the GPU, or the workloads, get decided, and then the whole infrastructure gamut comes into the picture.

To discuss this, I have two friends, two industry veterans from Vertiv and NVIDIA, for a fireside chat. We have Jigar; I think by this time Jigar is already known to the industry because of the immense contribution he has made to the AI ecosystem in India, working across all the layers: infrastructure, applications, use cases, and so on. He managed solutions and engineering for India at NVIDIA. And I have another friend, Peter Panfil. Peter is an encyclopedia at Vertiv. He is based in the US. He is our Senior Vice President for Technical Business Development, and he has been involved in many designs of large-scale data centers and gigawatt designs.

I would request Jigar and Peter to please come on the stage. Let’s have a round of applause for Jigar and Peter. So, Jigar and Peter, it’s all yours now.

Peter Panfil

Thank you. Thank you. Thank you. So, my friend, we got our introductions. Let’s see. Are we on? You guys can all hear us? Good? We’re good? Okay. All right. So, my friend, great to see you. I’ve got to start with how we would normally end. I believe that any discussion like this should start with us telling you what we think you’re going to get out of it. So what key message or messages do you think this audience needs to hear before we get started? And then we can spin off of that and go into the kinds of details we really need to.

So where do you think, what do you think this audience is the most interested in?

Jigar Halani

Okay. Am I audible? Okay, great. So I think, as the topic suggests, what you will hear from us over the next 30-35 minutes is about why AI is becoming so much of a focus for every country. What are the building blocks of these AI factories, and what is the sovereignty aspect of it? What are the two of us trying to contribute in this journey, for everyone for that matter? And how do we scale and make it work for everyone, to make AI for all, as India likes to call it? That is what I feel we should be discussing here, because that will be most relevant for the conference and the audience, and it is what we can contribute back to humanity as well.

What are your thoughts?

Peter Panfil

I agree with you completely. So the three things I feel are most relevant are, first, speed at scale. Now, it’s not just the speed of the compute; it’s the speed of deployment. The faster we can get the GPU structures in place, the faster we can benefit from them. And scale: you and I talked about the scale, and you’re going to quote some numbers, I think, that will blow the tops of their heads off. But speed at scale. The second thing is, we’ve got to stop thinking the way we thought in the cloud world. In the cloud world, we were thinking a high-density rack was 10 kilowatts, and that we would start at the source, at the grid, and work our way to the chip.

What I’m here to advocate is that you start at the GPU. Start at the chip. Let’s start at the chip, define the most economical, most efficient, fastest configuration from a compute perspective, figure out how to deploy that as a pod, then replicate that pod and achieve the speed. And the third is, don’t be scared. We’ve got it covered. We’ve got you covered. We know how to do this. I’ve got to just tell you, I told you this in the hallway: Vertiv made a big bet with NVIDIA. We made a big bet. I actually reassigned myself. I was leading what we call a GSA, a Global Strategic Account pursuit team, and I said, if we’re going to do this right, we’ve got to immerse ourselves in GPUs, understand how to deploy them, understand what drives our customers, and how we’re going to make them successful.

And I think that that has worked to both of our benefits.

Jigar Halani

Absolutely. And for humanity as well, right? We are fundamentally changing everything that has been pursued so far, and you brought up the cloud part of it. I was just thinking, while putting my hand on my beard, that only a few hairs were white back then. It’s not that long ago that I saw the retrieval clouds, where we store the information and just retrieve it, process it in the application, and get the information out, right? Compare that to the world of now: generating new data every single time and processing it right there, to give you a new input and a new output every time, right? Because the prompts are new, the outputs are new, and thereby the world sees something different every time, which is getting processed and delivered to the customers, right?

So it is such an amazing and fast-paced change in how these clouds have emerged. What are your thoughts in terms of what this space is all about, how our customers are keeping up with this, and what we are contributing in that journey, if you can throw some light on that?

Peter Panfil

Sure, that’s great. So first of all, it comes with understanding and having a transparent provider that says, here is what I’m producing today, here’s what I think I’m going to be producing a year from now, here’s what I think I’m going to be producing two years from now. Now, our goal is to make every deployment that you take on an AI factory. We all know what an AI factory is, right? An AI factory, think of it as a car factory, washing machine factory. Just, it’s a data factory, okay? And so our goal, I will just tell you, our goal along with your team is start as an AI factory. Yes, you might want to have mixed mode CPU and GPU workloads in your facility, but you’ve got to pilot the GPU configurations, at least pilot them.

When I say I reassign myself, I was working primarily with cloud providers, mostly hyperscalers, and they had a prescriptive formula. You know, they had their hacks, their number of racks. They would deploy them. We all knew which ones they were. Now, we can take a GPU pod, design it once, build it many, and apply it to the GPU that we need from that generation. It’s a complete change in the way we think about how to deploy the IT.

Jigar Halani

That’s so true. By the way, did you notice, every time we are talking about GPU, the screen is blinking. There you go. I think that’s a good message.

Peter Panfil

I think it’s because I owe somebody a nickel every time I use the letters GPU. It must be trademarked somewhere, all right? So I owe them a nickel. Okay, all right.

Jigar Halani

No, so I think the transition that we see, because it’s generating something new every single time, means the compute demand is just exploding, right? And thereby, the possibility of what we could do, more and new, is becoming bigger and better every time, right? And with that, I think the journey of the data center is also evolving much faster than what we had thought. You mentioned it: 10 kilowatts to 15 kilowatts, not that far back; we were talking about this about four or five years ago. Then we transitioned to 40 kilowatts, and now to 120, 130 kilowatts. And as we announced in January, we are now talking about 240, 230, 210 kilowatts per rack, which means a hall this size could probably run a great portion of India, with so many services that were probably never imagined before.

Peter Panfil

So I think it’s interesting that you comment on that, because one of the things we’ve heard back from our customers, who first do a lot of research, is: how do they take their critical infrastructure from CPU-based to GPU-based? And I think that’s something we’re seeing a lot of growth in. First, there’s the transition to liquid. Don’t worry about it. We’ve been doing liquid cooling for 40 years; we know exactly how to manage it. Then there’s the density of the compute itself. I’m amazed at how quickly and easily our customers understood the move from a 10-kilowatt rack to a 130-kilowatt rack. I credit you all. If you’ve already made that transition, I credit you.

You’re doing a spectacular job. Our job is to prepare you to have that go up by an order of magnitude. Not right away, but in future generations of compute. And so what we try to do is prepare you for future-ready thinking. I know you don’t want to think three years down the road, but you can do it. Let’s at least think three years down the road, based on the rate of what you’re seeing and what we’re seeing, both here in India and around the world.

Jigar Halani

My perspective is, I think all the reports are talking about a number of 5 or 6 gigawatts over the next three years. My personal understanding, from the lens I look at it through, both from NVIDIA and from what industry and government are trying to do, is that we will cross 10 to 12 gigawatts in the next three years, and that’s not far off. And I’m not going by any of the announcements that have been made in the last three years. I know where the reality stands in terms of inferencing and training workloads. I repeat, I started with inferencing. I did not start with training.

Peter Panfil

Yep. I noticed that.

Jigar Halani

The reason is we are a consumer country. Make a note of that. Yes. Right? He started with inferencing, not learning. Yes. All right? Because we are a consumer country, we have always been in the mode of first consuming, then building. And thereby, we are the largest ChatGPT consumer base for the globe. We are the largest for Perplexity. We are the largest even for Gemini as well, right? I think we were number two about a month or so back, but my view is that with this Jio announcement, we should have crossed into the number one position by now. The delta was pretty small, right? What does that mean? That means, if that entire compute capacity

that is currently not getting processed in the country should come back to India because of the DPDP law that got enforced a month or so ago, then this number will be even higher. And we are very democratic that way. You know, we are not closing the doors for any businesses. We have never done that. I’m sure, knowing the country, we will never do that under the leadership we have from Prime Minister Modi. That means we will still allow this processing to happen outside of India, but at the same time, for regulatory reasons, there are some verticals, say fintech, healthcare, defence and so on, or some of the government bodies.

Even if just those start to do inferencing locally, this number will easily touch 10-plus. And I’ve not yet included industry at scale, which is what Anthropic, J&J, Gemini and others are trying to capture from that market. So my understanding is it should cross 10, while all the reports are saying 5. India will.

Peter Panfil

So it’s amazing; we didn’t compare notes before we got on this stage. What was the number you gave me just 20 minutes ago? 10, right? So let’s think about that for a second. We’re at 1.5 now. We’re going to get to 10. To get to 10 in that three- to five-year horizon, we’re going to have to scale pretty far, pretty fast. We’re going to have to draw on our shared expertise, and by drawing on our shared expertise, we’re going to be a trusted advisor to you. Who’s your trusted advisor? I’ve got my trusted advisors: somebody I can always go to who will always give me the right answer. It might not be the answer I like, but they give me the right answer.

So what we want to do is make sure that you know, and we understand, how to scale. Getting to 10 in three years means we double every year, starting this year: my one and a half goes to three, my three goes to six, my six goes to twelve. We’re doubling every year. Now, if I take you outside of India for a moment, to the North America market: when that market first became aware of GPUs, there was a wide variety of acceptance. There were the folks who said, yep, I want to be there, and I want to do a pilot with you.
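The doubling arithmetic Peter walks through can be checked with a quick sketch; the 1.5 GW starting point and annual doubling are the panel's illustrative figures, not a forecast:

```python
# Back-of-the-envelope: the speaker's doubling trajectory for India's
# AI data-center capacity (figures are illustrative, from the panel).
def capacity_after(start_gw: float, years: int) -> float:
    """Capacity in GW after `years` of doubling every year."""
    return start_gw * (2 ** years)

trajectory = [capacity_after(1.5, y) for y in range(4)]
print(trajectory)  # [1.5, 3.0, 6.0, 12.0] -- 1.5 GW reaches 12 GW in three doublings
```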

They wanted to design a pilot they could replicate into all of their hyperscale or multi-tenant data center environments. The other thing they wanted was no data center left behind. They didn’t want to leave behind any capacity, because they knew capacity was going to be the currency. They knew power and land and GPUs were where they needed to be. The third thing was that their project timescales moved. We used to live in the cloud world with project timescales of 18 months. We now live in the GPU world with project timescales of between four and six months. So: a dramatic compression of schedules and a dramatic increase in capacity. What does that mean? We’ve got to build capacity at a faster rate

than we ever have before. And I know we’re up to it. We’ve added the capacity we need to be able to support that kind of demand.

Jigar Halani

Peter, that brings up a very good question. When we talk about this at scale, you said that in the U.S. you have already started to build at scale because you see a great opportunity there, while India is yet to build, right? In all fairness, I think some of our largest clusters are in the tens of thousands of GPUs, while in the U.S. we’re talking about millions of GPUs in a single data center. Would you like to throw some light on some of the learnings, in short, a quick bite for the audience on what India could do to get these things done in, let’s say, a three- to eight-month time frame? Not just the project planning, not just the understanding of BOQs, not just the understanding of…

who is going to deploy my project, what the project looks like, and the 3D version of that. How do I get the entire project done in, let’s say, a six- to eight-month time frame, starting from the land I have, all the way to GPUs running and humming and the production environment happening?

Peter Panfil

…shifted to 250. Now, along the way, we said: okay, let’s take these 10s and put them together and make a 50; let’s take the 50s and put them together and make a 100; and let’s put the 100s together and make a 220. Shoot me now. What we found is: pick an optimum building block that supports a number of GPUs that is, I’ll call it, reasonable at scale. Don’t take a design that has never been created before; take a design that we have a good basis on. For example, the pod. You just published some standards on pods. Reference designs.

Jigar Halani

Reference designs, okay.

Peter Panfil

We worked closely with your team on reference designs. We came up with the magic numbers: reference designs that minimize underutilization, maximize utilization, and make the pods as efficient as possible. I’ve been an advocate for efficiency within the data center space my entire career. If you save a watt, that’s a watt you don’t have to generate at the source, don’t have to distribute, and don’t have to reject. So the fewer watts you lose and the more watts you can put into the compute, the more tokens I can generate.

Our goal, in working with the GPU, I’ll call it the AI factory mentality, is: how much power can we deliver from the source to the GPU, as much as we possibly can, and how quickly can we physically deploy it? It boils down to taking the reference designs. We’re not saying all the designs are going to be the same; we know that’s not going to be the case. But I could show you a pod design, part of the reference design, that supports three generations of GPUs, this year, next year, and the year after, just by changing the way those pods are populated on the compute side. In fact, we’ve got one customer who wants to seamlessly mix GPU platforms within a pod: compute lineup number one is one generation of GPUs, pod two is another generation, pod three a third generation. They want to move seamlessly between GPU generations, because at some point they’re going to optimize particular functions, outputs, and services against a particular GPU platform.
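Peter's pod idea, one fixed power envelope repopulated differently across GPU generations, can be sketched as a toy capacity calculation. The pod envelope and per-rack densities below are hypothetical illustrations (loosely echoing the densities quoted later in the panel), not any actual Vertiv or NVIDIA reference design:

```python
# Sketch: one fixed pod power envelope, populated per GPU generation.
# All numbers are assumed for illustration only.
POD_POWER_KW = 1_000  # assumed pod envelope

RACK_KW_BY_GEN = {    # assumed per-rack densities by generation
    "gen-1": 130,
    "gen-2": 250,
    "gen-3": 500,
}

def racks_per_pod(gen: str) -> int:
    """How many racks of a generation fit the same pod envelope."""
    return POD_POWER_KW // RACK_KW_BY_GEN[gen]

for gen in RACK_KW_BY_GEN:
    print(gen, racks_per_pod(gen))
# Fewer, denser racks each generation; the pod envelope stays fixed.
```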

Jigar Halani

You just brought up a perfect point. A few things on why it’s important to follow the reference design. Just to bring everybody onto the same page: the CPU world was very different. A node going down meant a few hundred dollars of downtime. A GPU node going down translates to a few thousand dollars of downtime. And the fortunate, or unfortunate, part is that if your training workload is running and a node fails, you restart from the last checkpoint you took. Assume that checkpoint was taken eight hours before: eight hours multiplied by, say, 4,000 GPUs is the compute time you have lost in the cloud. Unfortunately, that translates to…

Peter Panfil

Real money.

Jigar Halani

Hundreds of thousands of dollars. Real money. Right? Real money. So as a cloud provider you might be thinking, and I’m talking about both sides here, hey, let me cut a few corners, do something here, something there, and the cluster is still up and running. But you know what? That’s going to cost a lot. And the customer may not have SLAs with you in that direction, because these are not the standard SLAs the world has seen in the typical cloud world; these are a different type of SLA that the customer signs with you. And if it’s an inferencing workload and it’s critical to the enterprise, we’re talking about downtimes that, by all the laws of cloud, are not acceptable.
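Jigar's checkpoint-loss arithmetic can be sketched as follows; the $3 per GPU-hour rate is an assumed illustrative cloud price, not a figure from the panel:

```python
# Hypothetical checkpoint-loss estimate from the panel's scenario:
# a node fails and training restarts from a checkpoint taken 8 hours
# earlier on a 4,000-GPU cluster. The $/GPU-hour rate is an assumption.
def lost_compute_cost(hours_since_checkpoint: float,
                      num_gpus: int,
                      usd_per_gpu_hour: float) -> float:
    """Dollar value of compute redone after restoring the checkpoint."""
    return hours_since_checkpoint * num_gpus * usd_per_gpu_hour

cost = lost_compute_cost(8, 4_000, 3.0)  # assumed $3/GPU-hour
print(f"${cost:,.0f}")  # $96,000 -- 'hundreds of thousands' at higher rates
```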

But the key question could be: hey, do I need these large-scale clusters only for training? Is that the only thing I use them for? The answer is no. I’m sure most of you follow what Jensen talks about: the three scaling laws. We won’t go into the details; I think Jensen has mentioned them at least 100 times in his keynotes. But in simple terms, let me take one or two good examples from the country itself, including what was announced in the last three days. Taking a very simple example: as everybody knows, we are 1.4 billion people, right?

Half of the citizen base is associated with farming, and thereby one-third of the families in the country are completely aligned to farming. They contribute just 15% of our GDP, but half the population is associated with farming. Now, the Government of India has launched two simple applications. One is to check the subsidized food that the government gives to that half of the citizens, subsidized down to a cent or two, one rupee to five rupees in Indian terms.

And a feedback call goes to all these citizens, asking: how was the quality, did you get the right quantity, was there any kind of fraud, and so on. With a bot speaking the local language, the government has been able to scale in the last month to about 50,000 calls a day, and has been able to catch fraud worth, per day, and I’m talking about per day, in the range of a couple of million dollars. Okay, that’s one kind of fraud. Financial fraud would be another one, because we are the world’s largest online payment transaction country. We contribute 50% of the world’s digital transactions, and that’s…

by NPCI data, globally accepted, and it’s free of cost in this country. We call it UPI; most Indian people would know about it. And imagine the innovation that takes place in fraud when UPI transactions happen: I do a transaction from my mobile to your mobile in milliseconds, and that data is in the hundreds of millions. Preventing that fraud is where AI is getting used. Now, if I’m putting in a couple of hundred million dollars over five years as an initial investment, think of the economic benefit and the money I’m giving back to citizens by preventing these frauds.

And there’s an example like this for each of these applications. Here’s another good one: we have 22 official languages spoken in 500 dialects in the country, plus unofficial languages; all told, over 100 languages. The Government of India has an application called Bhashini, which does translation, ASR, and TTS across the different languages of India. The central and state governments run about 10,000 websites. We have only touched 1,000 of them, and we are already hitting 100 million requests per hour. In simple terms, that translates to roughly 2 megawatts of data center consumption. If with 2 megawatts I’m able to cater to 100 million requests an hour, that’s massive.
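As a sanity check on the Bhashini figures, here is a rough energy-per-request estimate; it assumes the 100 million requests are per hour, since the transcript mixes per-hour and per-minute:

```python
# Rough energy-per-request check for the quoted Bhashini numbers
# (2 MW serving an assumed 100 million requests per hour).
WATTS_PER_MW = 1_000_000

def wh_per_request(power_mw: float, requests_per_hour: float) -> float:
    """Watt-hours of data-center energy per request."""
    # Power sustained for one hour gives watt-hours directly.
    return power_mw * WATTS_PER_MW / requests_per_hour

e = wh_per_request(2, 100_000_000)
print(e)  # 0.02 Wh per request, i.e. about 72 joules
```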

Peter Panfil

Yeah, look at the productivity improvement it’s bringing to the nation, and this is just a thousand of the Government of India’s websites. So we started with scale at speed; that’s where we began. It’s not just the scale of the data center environment, it’s the scale of the applications and the benefit they’re going to bring when they get fully populated.

Jigar Halani

Absolutely.

Peter Panfil

So I’m going to put you on the spot. On this journey right now, where are we? Are we at 3%, 5%, 10%? I will tell you, I cannot wait for AI to take every mundane task I have to do every day of my life and just do it. And then, once those mundane tasks are out of the way, I can use every gray cell up here for productive work.

Jigar Halani

Absolutely. Absolutely.

Peter Panfil

So where do you think we are in terms of that scale?

Jigar Halani

But you touched on a good point, what Meta calls personalized AI for everyone. We are getting there. But in terms of data, I think the Minister made that point yesterday at the inaugural. He gave a nice statistic: India generates 20% of the world’s data, while the data center capacity the country has today is 3% of the world’s data center capacity. So even if I don’t assume data generation accelerates over the next three to five years, even if I keep the data share at 20%, and we are a young population, so we are bound to generate more data, and ours is the cheapest 5G data rate in the world, but assume that we don’t…

Assume we don’t generate much more data and we restrict it. We still have a long, long way to go in building large-scale data centers just to make sure we process our own data ourselves. And that’s where this whole theme of sovereignty that the government is talking about comes in: at least let’s protect our data. That’s what’s critical, not the general data. And that’s where gigascale is more important.

Peter Panfil

But I don’t look at the sovereign data center approach so much as protection. I look at it as: where is the most efficient place to process the data? It’s where the data is generated. The most efficient and effective place to process data is at the source of the data. Absolutely. And we are limited by energy, so we want to protect that layer as much as possible. So I see a world where the data gets generated and gets processed as closely and as quickly after generation as possible. It’s used to further improve the performance and generation of subsequent data, so the data gets cleaned up as it goes; it gets more refined and more accurate.

We all know we make good decisions with good data, and bad decisions with bad data. So the real issue here is that we’ve got to take the data, and I won’t pretend our data is clean now; it’s not. Give the audience an idea: when a model is being put together, how much of the time actually goes into cleaning and pre-processing the data, and how much actually goes into the language model itself?

Jigar Halani

So, to build on that, because India just announced ten of its foundation models: cleaning the data is typically a three- to six-month journey on thousands of GPUs for a language model. If it’s a specific model for a particular task or vertical, and if the data is messier, with more videos and images and so on, it could be even longer.

Peter Panfil

Got it.

Jigar Halani

Right. And then comes building the foundation model itself and converting the model. That’s another 6 to 12 months of journey, depending on the size and type of model you are trying to build.

Peter Panfil

So it could be that a third of the time to realize my large language model is cleaning the data.

Jigar Halani

That’s correct.
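The "about a third" figure follows from the midpoints of the ranges Jigar gives (3 to 6 months of cleaning, 6 to 12 months of model building); a quick check:

```python
# Midpoints of the quoted ranges: data cleaning vs. model building.
cleaning = (3 + 6) / 2   # 4.5 months of data cleaning
building = (6 + 12) / 2  # 9 months of foundation-model building
fraction = cleaning / (cleaning + building)
print(round(fraction, 2))  # 0.33 -- about a third of the end-to-end timeline
```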

Peter Panfil

Processing that data. Now, once it’s there, I’ve got a solid foundation of data to use for future models.

Jigar Halani

That’s correct.

Peter Panfil

Okay. So, again, if we’re talking about it in terms of percentage, are we 5% there? Are we 10% there?

Jigar Halani

So I would not put a percentage on it. The reason is it depends on what type of model we are trying to build. If it’s a language model, and I’ll speak specifically to India, I won’t comment on other countries because it depends where they are in their data-building journey, then in my view India has already nailed data creation for what we call a small to mid-sized model. And it’s going to be made open source as well, as has been announced. I won’t claim we have a very large data set for a very large model, but for small to mid-sized, in the last year or year and a half, thanks to the IndiaAI Mission, we have been able to generate a pretty good amount of data, and pretty amazingly clean data.

Peter Panfil

Perfect. All right. So I’m getting the hook from the guys in the front row. Yeah, I run long. I’ll always run long.

Jigar Halani

Sorry, I’m pausing you there, and I want to diverge a little bit, asking as an Indian. What is it that Vertiv is trying to contribute on this journey, for India to begin with, and for the globe as well, in the building blocks of these gigawatt-scale data centers? If you can throw some light on that.

Peter Panfil

Sure.

Jigar Halani

I know it’s a little bit silly question.

Peter Panfil

No, it’s not a silly question.

Jigar Halani

No, no, it’s not. We want to push manufacturing. We want to push the India ecosystem to be as indigenized and self-reliant as possible. I want to know what Vertiv is trying to do.

Peter Panfil

So Vertiv is investing in people, in process, in production capacity.

Jigar Halani

Amazing.

Peter Panfil

Our goal is to build as much of the critical infrastructure here in India as we possibly can. And it starts with working with our partners and customers, first on pilots and then on production. And from that production, you’re going to benefit. I will just tell you, India: you’re going to benefit from the mistakes that have been made in other regions over the last 12 to 18 months. You’re going to be able to jump right past them, all right? All right, so here’s the sum-up. I asked you earlier what you thought the audience should get out of this discussion. What should they have heard from us that you want them to keep in their minds for the rest of the day?

Jigar Halani

For the rest of the day?

Peter Panfil

Rest of the day.

Jigar Halani

My view is, and I know it’s a mix of audience here: listen for how the building blocks of AI factories that the globe is learning about could be adopted fast by India. Followed by what’s happening in the model world, because that’s the fastest-changing and most fascinating thing happening in the world right now. Followed by how these models are getting deployed, and what applications are changing our world on a day-to-day basis. Fundamentally, businesses are being challenged on how they have operated for decades or centuries, right?

Versus how they could do that business today. If I were you in the audience, and that’s what I’m trying to do as well, constantly learning from this conference: what have the people who have done this at scale learned that I can take back and deploy in my country, in my profession, in my day-to-day life? That’s what I’m trying to do, and that’s what I would recommend everyone else do as well.

Peter Panfil

Perfect. So let me add on top of that. First, it’s scale at speed, and it’s not just speed of build: it’s speed of compute, it’s speed of adoption. Yes. Second, stop thinking grid-to-chip and start thinking chip-to-grid, and let the chip help us define what that critical infrastructure needs to look like. And third, we’re going to make it as sustainable as we possibly can, because a watt I don’t waste is a watt I don’t have to generate, transmit, or reject. All right, I think you’re up next. Any questions? Do we have time to take questions? Okay, we have one hand up. She’s going to run a mic over to you.

Yes.

Audience

Hi, my name is Ani. I have a question. As I can see…

Peter Panfil

Use your outside voice. That’s what my family always says.

Audience

As I can see, AI is everywhere, and in today’s era it is totally about AI. As you also said, every sector, industry, companies, education, is using AI. So the day is not far off when humans are totally dependent on AI, and when AI has a subconsciousness, thinking like humans. Is there any chance that humans and AI end up in the same niche?

Peter Panfil

So I think that early on, AI got a bad rap: the computers were going to take over and blow up the Earth. That’s not what we’re finding. What we’re finding is that AI makes our lives easier and better every single day. I know that traffic systems in the city I’m in now use AI to look at traffic congestion and traffic patterns, and they actually time the lights to improve throughput on particular roads at particular times of day. That’s where AI is going to really benefit society: in transportation, in medicine, in research. I’m not so worried about the data being used for evil.

I’m really excited about the data being used for good because that’s where I think we’re going to get the most benefit.

Audience

True. But what if AI gets its own subconsciousness? Then it doesn’t need humans to act.

Jigar Halani

I wish you see that day. Somebody told me, when I started my journey with the phone, that this is what was going to happen: you will lose touch with your family, you will always be busy with the phone. I don’t think we have even scratched the surface of that, even after I’ve had this phone with me for 20 years.

Peter Panfil

Here’s the example I like to give. Do you think about breathing and blinking? No, you do them automatically. So let’s let AI take those autonomous functions and do them for you automatically, so that you don’t have to think about them. And if I don’t have to think about breathing and blinking, then all of a sudden I can use my brain matter to do other things, so many things. I look at it as something that’s going to free us from the mundane tasks, the breathing and blinking. Come on, you’re laughing at me. But do you think about breathing? No. You only think about breathing when you’re trying to hold your breath.

Okay? So I think what’s going to happen is AI is going to become to us like breathing and blinking. It’s going to become an autonomous function that just runs in the background of our lives constantly and makes it better. It’s going to learn what we do and how we do it and how to improve that performance and give us more freedom to do what we really should be doing, and that is making the world better.

Audience

Thank you.

Peter Panfil

Thank you. That’s a good question; I’m glad you asked it. We have one more? Yeah. Hi. We’re going to that side. Hello. Big one.

Audience

This is Shlom. I was watching an interview with Mr. Jensen Huang of NVIDIA, and he explained AI as a five-layer stack: energy, chips, infrastructure, models, and applications. He also explained how the US and China are working on different layers and how they are many years ahead of us in different layers. Which layer do you think India can excel in or match them on in the coming years?

Jigar Halani

So, I think we are already doing that. It’s a great question. When we talk about sovereignty, these are the layers in which we should be sovereign. We cannot be importing energy from anybody; we need to generate it ourselves. Otherwise, how will we run these lights and so many other functions, and how will we power these data centers? The good news, and the Minister explained this five-layer cake so nicely in his keynote yesterday, is that, as I’m proud to say and as we all know, half of the energy we generate today is green energy.

So that layer is sorted, though you and I have a lesson to learn: more companies have to contribute through solar, hydro, wind, and other methods. Where NVIDIA is contributing to the nation today is on the top three layers. We are helping the nation build AI factories with all the learnings Peter also mentioned: you don’t have to relearn all the mistakes of the last 18 months that we went through in other regions, because they were ahead. India was delayed by at least 12 months or so, but we have put in all those learnings, and the factories here have come up far faster than anywhere else in the world.

By all means. The second piece is the serving layer, for when you build these applications: how do you do inferencing? You’ll be surprised to hear that Indian cloud providers never had a control plane; we were dependent on other nations to give us a control plane to run the cloud inferencing stack. NVIDIA has open-sourced that work and shared it with the Government of India, and that was the announcement Sarvam made, with a product named, if I’m not mistaken, Prava; I hope I’m pronouncing it correctly. That layer is now completely owned by the Government of India and an Indian company, to do the entire inferencing locally.

And the last piece is the application layer. I’m sure you have visited the booths downstairs in Hall 5. I don’t think we have left out any booth: every booth is powered by the NVIDIA open-source stack we have given out to build agentic AI platforms and foundation models. That’s our contribution to the nation, and India is right there. What’s missing, and I will fully agree with this, is our own chips, and that’s the autonomy every country is trying to achieve. I’m, again, proud to say that NVIDIA is fabless: we don’t produce; we outsource that to Taiwan and a few other countries.

We have opened up partnerships in many countries, and we are very open to partnering with India as well to share our technology, so that India can do the modifications and the manufacturing itself. That’s the last piece that’s left, and I’m very confident that with this Semicon mission, it is going to happen very soon, whether with NVIDIA or with somebody else.

Moderator

Thank you so much. We’ll have to get into our next session shortly. In the past it was 10 megawatts, 12 megawatts, and today we have…

Peter Panfil

Gigawatts. Gigawatts, baby. Gigawatts. Gigawatts.

Moderator

I just wanted to leave you with an important piece of information. It took about eight years to build the first 5 gigawatts, and another 10 gigawatts is going to happen next year. So look at the speed and scale; we both have to work together. And as Jigar rightly mentioned, all five layers will present a tremendous opportunity to work on: energy, infrastructure, compute, models, applications, and so on. A huge amount of resources required, a huge amount of support required, and a very exciting time ahead. Thank you so much.

Peter Panfil

And it’s going to be a systems approach. Systems. Think systems. We as an industry have thought in boxes for too long: I’ve got this compute box or that compute box. It’s now a system, a platform, and that platform generates tokens. The new measure should be tokens per watt per dollar.
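Peter's proposed metric can be sketched as a simple comparison; all figures below are invented purely to illustrate the ratio, not measurements of any real system:

```python
# Sketch of 'tokens per watt per dollar' as a comparison metric.
# All numbers are made up for illustration.
def tokens_per_watt_per_dollar(tokens_per_sec: float,
                               watts: float,
                               cost_usd: float) -> float:
    """Higher is better: throughput normalized by power and capex."""
    return tokens_per_sec / watts / cost_usd

old = tokens_per_watt_per_dollar(10_000, 500_000, 1_000_000)
new = tokens_per_watt_per_dollar(50_000, 750_000, 1_200_000)
print(new > old)  # True: the newer system wins despite drawing more power
```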

Jigar Halani

Absolutely. Absolutely. Very well said. Thank you so much.

Moderator

He’s one of the guiding minds behind implementing many large-scale data centers for Vertiv and the entire ecosystem. Let me welcome Srikanth on stage; a good round of applause for Srikanth. And another gentleman we have from Vertiv, with about 35 years of experience in leadership roles across Europe, the Middle East, Africa, India, and Southeast Asia; you name the region, he’s been there for many years. His name is Sanjay Sainani. He joined us as Senior Vice President, Technical Business Development; he drives Vertiv’s technical strategy and business development. Let me welcome Sanjay on stage; a good round of applause for Sanjay. And I’ll be asking some questions on your behalf.

I will also open the floor sometime later. Welcome. Am I audible? Okay, so let me start with you, Srikanth, last question first. What is the one learning from your experience of implementation, of building large-scale AI factories, that you want to give the audience? That was going to be my last question, but I want to ask it first. One piece of advice, out of your experience, because you already have good hands-on implementation experience. From a sustainability and implementation standpoint, what is one learning you want to give us in India as we build factories at that scale?

Srikanth Cherukuri

Yeah, it’s an interesting question. About a year or a year and a half back, I came to India to review some data centers, and when I was asked to do that, one of the first things that crossed my mind was: wow, India is building data centers at scale? Because when we were growing up, power used to be a big issue; the reliability of power used to be a big issue; the availability of power used to be an issue. And when I came here, and I had been away from the ecosystem for a little bit, I was amazed at how far things have come in terms of the availability and reliability of power.

The second thing I was amazed at is the knowledge here in the ecosystem, everything from safety to speed-of-light construction, and how far the product ecosystem has come. I think the next step, in terms of where India is going in this AI factory build-out, is this: if you look at the U.S., it’s a little further ahead in gigawatt scale and in deploying high-density, liquid-cooled racks; there’s a lot more experience over there. And I think our combined companies have created that experience. I’ve been working with our partner for the last four to five years on the R&D work, the engineering work, and then eventually the deployment work.

So we have matured a lot in what we consider AI factories versus data centers, and there is a lot of advantage for India in drawing from that experience, from our combined knowledge pool. It’s the same companies: whether you go to Europe, the US, or India, it’s still Vertiv and NVIDIA. There has to be strong cross-pollination between the ecosystem in the US and here, strong knowledge sharing. We are in year two or three of this AI factory build-out worldwide, and as India picks up pace on this journey, there’s a huge opportunity not to relearn all those lessons the hard way, but instead for our combined teams to share that knowledge and build much faster here.

Moderator

That’s right; as thought leaders, both sides need to do that, and we need to equip the market for those kinds of things. And let me also tell you, on Vertiv’s side, whatever innovations we are doing in the US, we are bringing to India in real time, so that there’s no latency here; whatever is happening in the US, we want to bring to India. That takes me to my next question, for Sanjay. Sanjay, we have heard about speed, and Peter and Jigar spoke a while back about speed at scale. What is your thought process about speed at scale, about ramping up infrastructure at that speed?

Sanjay Kumar Sainani

Most of us in the space of mission-critical applications, and within IT, dealing with semiconductors, know Moore’s law. That was pretty much a 10x roughly every couple of years in terms of performance, and while performance was 10x, the energy required to reach that performance was probably 2x to 2.5x every generation. So you were getting amazing efficiency: a 10x performance gain for a 2x to 2.5x increase in energy usage. That’s what you saw for the past many decades, and we all thought Moore’s law had reached a plateau, that there wasn’t much happening… and this is where companies like NVIDIA, working with the rest of the semiconductor ecosystem, came up with multi-tiered chip structures.

When you look at some of today's chipsets, these are three-story, four-story, six-story buildings. If you look under a microscope, there are layers and layers of transistors, billions of transistors layered together. And the innovation that has kick-started now is again retracing Moore's law. If you look at what NVIDIA is announcing in its new generations of chips, there is a humongous performance improvement every generation. While the performance gain is 10x, 20x, 50x, the energy consumption is also jumping up, not by 10x, but by 2x or 2.5x. So, as Jigar and Peter mentioned a little while ago, you have the current generation.

The current generation is at 130 to 140 kilowatts per cabinet, while the next one is at 250 to 260, and the one down the road is 400 to 500 kilowatts per rack. And while I don't want to give away too much, a one-megawatt rack is not too far away; people are already testing it. Now think about that: one megawatt in a rack. A few years ago, the whole data center was one megawatt. The white space would have 200 racks of five kilowatts each, and you had generators, chillers, and transformers supporting that one megawatt. The white space was about 80% of your footprint and the supporting facilities were 20 to 30%. Now this has flipped. You have only one cabinet, but you still need all of that.

You still need one megawatt worth of power, generators, chillers, transformers, everything. So in that context, we are innovating at tremendous speed; whatever you invest in today is outdated two years down the road. That's challenge number one. The second challenge is that it costs a lot of money. Jigar mentioned that a data center may cost a billion dollars, or, to make the numbers more modest, $100 million, but the GPUs sitting inside are probably worth $2 billion. So if I place an order today for $2 billion of GPUs, I want to monetize this project very, very quickly. In the old days, building a home in India, and in most other developing countries, meant people carrying bricks on their heads.

It takes two years to build a home that way. As a homeowner, you don't see that as a problem; you're trying to save a few dollars a year, so you'd rather have a person carrying bricks on his head than bring in a cement mixer, because you think you are saving money. In this world, you are losing money, because the money you spend is still roughly the same, perhaps 10% cheaper, but your return only starts after two or three years, because only when you turn on the switch, only when you turn on the tokens, do you make money on your investment. So it is speed to token.

Whether you spend $100 million or $1 billion, you need to spend it fast and get the factory up and running very fast, so that the tokens come out very fast and you get your return on the capital employed. Anyone here from the finance industry knows that ROCE, return on capital employed, is a seriously important KPI for money. So that's speed. The third is scale. The demand is heavy. Jigar and Peter talked about a few areas with high-value applications; think of what agentic AI can do for you and in how many areas of your daily life it can affect you. The scales are crazy.

So not only do we need to work on the degree of difficulty in terms of density, we need to deploy it tomorrow morning, and we want to deploy it at massive scale. That is the problem statement, or the opportunity, that we have.
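Sanjay's speed-to-token argument can be put in rough numbers. The figures below (capex, monthly token revenue, build times) are illustrative assumptions, not values from the panel; the point is only that delaying time-to-first-token delays every dollar of return on capital that was committed up front either way.

```python
# Illustrative "speed to token" economics: same capital, different build times.
# All numbers here are hypothetical assumptions for the sketch.

CAPEX = 2.0e9                  # e.g. $2B of GPUs plus facility (hypothetical)
MONTHLY_TOKEN_REVENUE = 80e6   # revenue once the factory is live (hypothetical)

def cumulative_return(build_months: int, horizon_months: int = 60) -> float:
    """Net position after `horizon_months`: revenue only accrues
    after the build completes, while the capital is committed up front."""
    revenue_months = max(0, horizon_months - build_months)
    return revenue_months * MONTHLY_TOKEN_REVENUE - CAPEX

fast = cumulative_return(build_months=12)
slow = cumulative_return(build_months=24)

# Every month shaved off the build is one more month of token revenue.
print(f"fast build, net after 5 years: ${fast / 1e9:.2f}B")
print(f"slow build, net after 5 years: ${slow / 1e9:.2f}B")
print(f"cost of 12 months' delay:      ${(fast - slow) / 1e6:.0f}M")
```

With these assumed numbers, a year of extra build time costs nearly a billion dollars of foregone return on the same investment, which is the ROCE argument in miniature.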

Moderator

So, Sanjay, when you say speed at scale, every week or month saved to deployment means faster go-to-market, right? And generally, when you have to deliver speed at scale, you also have to design for scale, and that is where the blueprint discussion starts. Now, Srikanth, when it is a blueprint, it starts from a GPU cluster architecture. What is your thought process? When you design for scale, do you not first have to decide which GPU you want to go with today, and then scale for that? Why does the blueprint of any data center have to start with the GPU cluster?

Srikanth Cherukuri

Could you repeat the last part again?

Moderator

Okay. When we have to deliver speed at scale, we have to design for scale, and that is where the blueprint starts with the GPU. The GPU is the first thing we need to start with. Why is that?

Srikanth Cherukuri

Yeah, I think there are a couple of things, right? When we first started designing the early phase of AI factories, we were relying on general-purpose-built data centers and changing them rapidly. They weren't really AI factories; we were trying to figure out how to make it work. It was not designed at scale; these were not purpose-built designs. But the moment came upon us so quickly. And again, NVIDIA and Vertiv together foresaw that moment. We didn't foresee the scale; we foresaw the moment. And we went very quickly from 10 megawatts to, now, talking about gigawatts.

And infrastructure doesn't move at that speed. The design can move at that speed, but someone has to actually build out the AI factory, build out the data centers, manufacture so many CDUs. So we were in a phase where we made it work, but in a "make it work" way, right? If we had to do it all over again, that's not how we would do it. Now we have a moment where we can say: if we were to do it the right way, we now know what the future looks like. That's why we've redefined the data center as an AI factory, which is fully integrated, where you go from the chip design to the system design to the liquid cooling design and the power design.

In fact, even the shell and the campus are purpose-built as an AI factory. So we have to start thinking at that scale in terms of design as well as manufacturing, delivery, and operation. And I think we've already started doing that at the design level: NVIDIA now has a DSX reference design, which is actually based on Vertiv's SmartRun products and large-scale CDUs. Now we have to start deploying at that scale. That is one of NVIDIA's focuses: how do we deploy at the speed of light?

Everything from logistics to operations is being redefined. That's why you have to think of it as an end-to-end integrated product.

Moderator

So you say we have to design for the future, which means every design we do has to be future-proof. What are two important ingredients you would suggest to our audience when you talk about future-proofing from a design standpoint?

Srikanth Cherukuri

Yeah, I think the biggest one, which I still have to repeat sometimes because it hasn't caught on, is this: Jigar and others have spoken so much about rack density, but we have to stop thinking about rack density and start thinking about row-level and data-hall-level density. Right now we are slowly retrofitting the entire footprint to match an AI factory design, and we will not be doing that generation to generation; that is just very expensive. If we keep changing the technology, you are not only spending a lot on building it, you are spending a lot on retrofitting it, and that eats into the ROI. So we have to stop the mindset of "I'm at 30 kilowatts today, I'll do this for 40 tomorrow, something else for 100, something completely different for 200 or one megawatt." We have to start thinking in bounding boxes, at the data hall level or the row level. That is what our latest reference designs do: look at the entire pod as one big block. Don't change the technology piecemeal; optimize it with a future-proof mindset. Will this work for that one-megawatt rack? And today, with digital twins, you don't need to actually build it to find out; you can simulate it. So that's number one: take that bounding-box mentality and map it technology-wise, right up from the chip to the utility. The same for redundancy: this redundancy for compute, this redundancy for network. Have a cluster mindset where you map the cluster to the power and thermals perfectly, so that every watt goes into maximizing tokens rather than into redundancy and old-school thinking. If you combine both of those elements, you get a future-proof data center. Again, the hyperscalers have mastered this over the last 10 to 15 years.

We pretend that AI is the first time we're doing an infrastructure build-out, but it's not. The hyperscalers have been doing this since the late 2000s, right? So they have mastered the concept of a global reference design: once you lock in that design, you stay consistent generation to generation. You build a template and you just repeat it.
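Srikanth's bounding-box idea can be sketched as a simple feasibility check: fix a row-level power envelope once, then verify that successive rack generations still fit inside it, rather than re-planning the hall for each generation. The per-rack densities below echo the figures quoted in the panel; the row envelope and rack-position count are hypothetical assumptions.

```python
# Bounding-box mindset: fix the row envelope once, then check each
# GPU generation against it instead of redesigning per generation.
# Row envelope and positions are assumptions; rack kW echoes the panel.

ROW_POWER_KW = 1200      # fixed power budget for one row (assumption)
ROW_RACK_POSITIONS = 8   # physical rack positions in the row (assumption)

generations_kw_per_rack = {"current": 140, "next": 260, "future": 500}

for gen, kw in generations_kw_per_rack.items():
    by_power = ROW_POWER_KW // kw          # racks the power budget allows
    racks = min(by_power, ROW_RACK_POSITIONS)
    util = racks * kw / ROW_POWER_KW       # fraction of the envelope used
    print(f"{gen:>7}: {racks} racks, row power utilization {util:.0%}")
```

The row envelope never changes across generations; only the number of cabinets inside it does, which is the retrofit-free behavior the bounding-box design is after.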

Moderator

I would like to ask the same question to you, Sanjay. From a design and infrastructure standpoint, what are two things you would offer for a future-proof design that lasts at least the two or three generations Peter spoke about?

Sanjay Kumar Sainani

I think, whether we like it or not, the speed of change in the semiconductor, IT, and AI world is very different from the speed of change in the physical world of power and cooling, and even the life cycles and depreciation cycles are different. Compute and storage in the IT world are depreciated every three to five years, because that is the pace of evolution, while generators, chillers, transformers, and UPS batteries are depreciated on a 10-to-15-year cycle. So you have to figure out how to run two to three cycles of IT within one cycle of infrastructure. This is a requirement. If you don't, then, to the point Srikanth just made, you would keep on investing, and that is not good business at all.

So how do you do that? In the cloud world, we mastered this. In very simple English, how are we doing it today in the cloud space? We have a 30-megawatt data center with 2, 3, 4, or 5 megawatts per data hall, and then we don't worry about what is inside the hall. How does it matter? I have 5 megawatts of power and 5 megawatts of cooling capacity; bring whatever you want, and as long as it stays within 5 megawatts, you're good to run. The only thing you might retrofit on a generational change is the final mile of cable or connectors. Now, that becomes slightly more complicated in the AI world, because the densities are much higher, and while providing power is relatively easy,

Pumping a lot of air, or now a lot of liquid, is not as simple. There is much more piping. In fact, I joke that the future belongs to electricians and plumbers; believe me, there is so much plumbing in a data center now that you will need plumbers in the data center. So the only way to do it, as was mentioned in the previous discussion as well, is to look at capacity pods: a 2.4-megawatt pod, a 6-megawatt pod. Now you have a pod. It fits a certain number of GPUs of today's generation, it has a certain power capability and liquid capability, and it's done. Everything upstream of that, the transformers, generators, and utility connections, is designed for 6.2, 6.4, whatever the case may be.

Now let's say generations change over the next three years. All you have to do is reconfigure the cabinets; everything else stays the same. That is precisely what we do in the cloud world. It took us a couple of years to figure this out, because it was all being done for the first time, but this will definitely be the way to go going forward.
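Sanjay's pod abstraction ties his two depreciation cycles together: the pod's power and liquid envelope lives on the 10-to-15-year infrastructure cycle, while each 3-to-5-year IT refresh only changes how many cabinets of the new generation sit inside it. The pod size and rack densities below are illustrative assumptions drawn loosely from the discussion.

```python
# Pod abstraction: the infrastructure envelope outlives several IT
# refreshes. Figures are illustrative assumptions from the discussion.

POD_MW = 6.0              # fixed pod power/liquid envelope (assumption)
INFRA_LIFE_YEARS = 15     # generators/chillers/transformers depreciation
IT_REFRESH_YEARS = 5      # GPU generation depreciation

refreshes = INFRA_LIFE_YEARS // IT_REFRESH_YEARS
print(f"IT refreshes per infrastructure cycle: {refreshes}")

# At each refresh, only the cabinets change; the pod envelope does not.
for gen, rack_kw in [("gen 1", 140), ("gen 2", 260), ("gen 3", 500)]:
    cabinets = int(POD_MW * 1000 // rack_kw)
    print(f"{gen}: {cabinets} cabinets of {rack_kw} kW in the same {POD_MW} MW pod")
```

The upstream plant is sized once for the pod; each refresh is then a cabinet reconfiguration, not a retrofit, which is exactly the cloud-world pattern Sanjay describes.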

Moderator

So, Sanjay, let me bring you to a very different topic now: energy efficiency. When we talk about gigawatt scale, energy conservation is the most important piece. Now, we as a country are tropical, right? Temperatures range from 10 to 48 degrees. Across such a span, what do you think is the right approach to improving PUE? And perhaps water usage: what best practices would you suggest to the market for saving energy and improving PUE? Of course, liquid-cooling adoption has already brought it down relative to the old norm, but what would be the next stage of best practices from your experience?

Sanjay Kumar Sainani

I think PUE is, I don't know if this is the right word, probably a very abused term in the industry. It is used so commonly, thrown around so easily, that everyone believes, "well, I have a lower PUE." First of all, I can give you a better PUE without doing anything: I can increase the air temperature. Suddenly your PUE looks much better. But now your server fans will speed up, because the temperature is higher and they need to move more air, so the IT load increases, while your cooling load reduces, and you suddenly have a better calculation.

But in reality, your total power increases, which you don't realize; only the PUE looks better. So PUE is a bit of a thrown-around word, but here is how I look at it. The PUE in the data hall, in the white space, is the same irrespective of where you build. I need liquid at a certain temperature and air at a certain temperature entering the rack, and the rack does what it does; it doesn't matter whether you build in Mumbai or Singapore, or where I live in Dubai, or in Timbuktu, it's exactly the same. The question is: how do you throw the heat out? Because that depends on the environment outside.

Are you in Singapore, where it rains all the time? In Iceland, where it's never more than 20 degrees any time of the year? Or in Dubai, where it reaches 52 degrees in summer? At least that's what we design for, 52 degrees. And that's where different technologies need to be adopted: air-cooled chillers, or in some markets water-cooled chillers. One of the unique solution sets we have started to see, especially in India, is that because of where our cities sit between the latitudes, the temperature variation through the year is distinctive: very hot in the summer and reasonably good weather in the winter.

So there are some entitlements you can get in the winter. For example, we can use chiller technologies where, during the winter months, we are able to use a bit more free cooling, and in the summer or high-demand months we add more mechanical cooling: DX technologies, compressor elements that come in and provide that extra cooling when required. So we can optimize how we cool across the thermal cycles of the year and bring down the annual PUE, because at the highest temperature you will need that cooling whether you like it or not. It is this management of PUE through thermal cycles, and some optimization through load cycles as well, because load in the AI world, unlike in a cloud business, may not be uniform throughout the year, the month, or the day. Certain optimizations in how you use your CDUs and fan-wall units to bring that energy down will help improve the PUE.
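Sanjay's warning about gaming PUE falls straight out of the definition, PUE = total facility power / IT power. The numbers below are made up for illustration: raising the supply temperature cuts cooling power, but server fans count as IT load and work harder, so the reported PUE improves even though total power goes up.

```python
# PUE = total facility power / IT power. Illustrative numbers only.

def pue(it_kw: float, cooling_kw: float, other_kw: float = 0.0) -> float:
    total = it_kw + cooling_kw + other_kw
    return total / it_kw

# Baseline: cooler supply air.
base = {"it_kw": 1000.0, "cooling_kw": 300.0}
# Raised supply temperature: cooling drops, but server fans (IT load) rise.
warm = {"it_kw": 1100.0, "cooling_kw": 230.0}

print(f"baseline PUE: {pue(**base):.3f}, total {sum(base.values()):.0f} kW")
print(f"warmer   PUE: {pue(**warm):.3f}, total {sum(warm.values()):.0f} kW")
# Reported PUE improved (1.300 -> 1.209) while total power rose
# (1300 -> 1330 kW): the metric hides the extra fan energy.
```

This is why PUE comparisons only mean something at matched IT conditions; the denominator moves with the very knob being turned.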

Srikanth Cherukuri

One thing I would say about that is that the design is there, right? Whether it's the water temperatures or anything else, we're all designing to the same targets. Where it becomes extremely manual is that we are still in the traditional mode of operating data centers, with a large control room, optimizing for uptime and safety, safety in the sense of no risk of downtime. We are very risk-averse. But even if we wanted to do what Sanjay just suggested, there is no automated way of doing it, because the chip-level telemetry doesn't talk to the data-center-level telemetry. And that is what NVIDIA's reference design is looking to change today. Again, if you were to retrofit a brownfield facility, this would be harder.

But if you were to build purpose-built, and of course this is an opportunity for India, if you're building an AI factory today, there is no reason why you can't integrate telemetry from the chip to the chiller. There's no reason why you cannot simulate how to optimize that, run a representative sample workload, and see how much energy you save. I'm sure that simulation will tell you that you'll save a ton of energy without any human intervention.
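In a purpose-built facility, the chip-to-facility loop Srikanth describes could look roughly like the sketch below: read GPU-level temperatures and raise the CDU coolant setpoint when there is thermal headroom, so cooling tracks the actual workload instead of a fixed worst case. All interfaces here (`read_gpu_telemetry`, `choose_cdu_setpoint`) are hypothetical placeholders, not a real NVIDIA or Vertiv API.

```python
# Sketch of a chip-to-facility control loop. The telemetry and CDU
# interfaces below are hypothetical placeholders, not a real API.
import random

def read_gpu_telemetry() -> float:
    """Stand-in for rack-level GPU telemetry: hottest die temp (C)."""
    return max(random.uniform(55, 80) for _ in range(72))

def choose_cdu_setpoint(max_die_c: float, limit_c: float = 85.0,
                        low_c: float = 20.0, high_c: float = 32.0) -> float:
    """More thermal headroom -> warmer coolant -> less chiller energy."""
    headroom = max(0.0, limit_c - max_die_c)   # degrees of slack
    frac = min(1.0, headroom / 20.0)           # normalize to [0, 1]
    return low_c + frac * (high_c - low_c)

for _ in range(3):
    t = read_gpu_telemetry()
    print(f"hottest die {t:.1f} C -> CDU supply setpoint "
          f"{choose_cdu_setpoint(t):.1f} C")
```

A real deployment would of course add rate limits, sensor validation, and fail-safe behavior; the point is only that the decision needs both chip-level and facility-level signals in one loop.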

Moderator

You spoke about retrofit. Normal cloud workloads have been running at, say, 5, 10, or 15 kilowatts of load. When it comes to AI or GPU augmentation on the same platform or in the same aisle, how easy or difficult will the retrofit be, and what are your one or two tips? If you are talking about AI optimization via telemetry specifically, there is an existing workload running at small-to-medium densities, but in that row you want to put a liquid-cooled or air-cooled GPU, which means retrofitting some amount of passive infrastructure. How difficult or easy would that actually be?

Srikanth Cherukuri

I think, again, if you go back to that journey, even the design and the retrofit were extremely cumbersome, and even today, at the enterprise level, it is extremely difficult. If I were an enterprise CTO looking to deploy AI compute, and I look at our experience over the last year, I might actually be a little wary. Everything from design, to following local regulations for the high power and liquid cooling, to having the secondary loop built out: that could be pretty scary at the end of the day. But what Vertiv is doing, for example with SmartRun, a fully integrated mechanical and electrical system that can be purpose-built for any pod size and can track our most scalable reference designs, I think that would be the way to go. That is why Jigar, too, mentioned following our reference design as closely as possible. All these innovative designs and offerings will improve adaptability to future change, is what I can say.

Moderator

My last question to you: there seem to be some NVIDIA-ready design offerings or certification offerings. Would you like to give some insight about that? Certification programs for NVIDIA-ready data centers, or NVIDIA-ready designs.

Srikanth Cherukuri

Yeah. Whether it's a colo, or at cloud scale, at NCP scale, what we've been doing from the beginning, just as we've been enabling other partners, is enabling a lot of colo partners to build NVIDIA-ready data centers. That optimizes for the water temperatures we recommend, the pod sizes we recommend, the redundancy we recommend, the telemetry integration we recommend. For the partners that have followed that design, we have programs, whether DGX-ready or NVIDIA-ready. Now, the only thing I would ask of these partners, and of those looking at this vertical, is: who is actually doing that

At speed of light, in a sense. A lot of the data center industry is still thinking like real estate developers: purpose-building tranches of data centers for each customer, saying "I'm giving this space, this cage, to you, and I'll build it out the way you want it." That is the traditional way of thinking, but you can't wait; the way the industry is operating, no one can wait for that, right? So the partners who are building purpose-built AI factories, who want to be part of that future, build at large scale on the NVIDIA design, whether or not they carve it into tranches, so that when the customer comes, it is already built to the specs.

Moderator

That's really insightful; many of our colo customers will take good insight from that. With this, I would turn to the audience for any questions.

Audience

Hi, I'm Dal Bhanushali. Thanks for the talks, this one and the previous one. We have been talking about how we will scale India in the future, but we also need to scale the talent. I wanted to get some viewpoints from your experience: as we double capacities, you also need people to run the data centers. We need DC ops specialists. We can run the NVIDIA-optimized containers on our laptops, but those Vertiv chillers, those skills are not common and cannot easily be taught in schools today. So what's the plan? How do you think we should proceed, especially when doubling every year is a huge challenge, right?

Moderator

I'll take this question first. At Vertiv, we realized this challenge well ahead of time and started a number of skill development programs. First is operation and management of the infrastructure. We started that in collaboration with the Indian Institute of Technology, Chennai, where we train diploma and B.Tech graduate engineers in how to manage the operation and maintenance of data centers. It is an extensive eight-to-twelve-week program, off-site as well as on-site. That is one part, and there are many other programs on the cards to develop design, engineering, and other skills.

These programs are already available on the web; anybody can have a look and enroll. Is there anything anyone would like to add about skill development, or any other development activity NVIDIA would like to undertake with us as we scale so high?

Srikanth Cherukuri

Could you repeat the last part of the question, if you don't mind?

Moderator

He's asking about how, as the scale goes up, a lot of resources are required, and skill development is a big challenge. While we at Vertiv are taking care of the operations and management piece, developing a lot of people through colleges and engineering institutions, what initiatives is NVIDIA taking to develop skills within the ecosystem?

Srikanth Cherukuri

Yeah, a couple of things I would say. One is that as you keep going up in scale, the prefab systems that Vertiv is developing are going to be absolutely critical, because the enterprise-level difficulties I was talking about can largely be solved with them. A lot of the time you're waiting for the data center, waiting for the data hall to get ready, before you can deploy the compute systems, and each of these has dependencies on the others, all centered on that space, right? When you do off-site prefab integration and prefab manufacturing, you can do it all in parallel.

You can do it all at scale, in parallel, and then bring it into one place. In the meantime, you can do the testing in the factory; a lot of the testing today is done in the data hall. You could avoid all that, shift it all to the left by moving it outside the data hall, then bring it in once the shell is built, and really condense that build-out.

Sanjay Kumar Sainani

Srikanth, as you rightly say, a lot of activity that is supposed to happen on-site is moving off-site: pre-engineering it, building it at scale, and then deploying it at the site. That is the way forward. Any more questions? Otherwise, we can hold it here.

Factual Notes: Claims verified against the Diplo knowledge base (6)

Confirmed (high confidence)

“Sovereignty and innovation must run together, not as competing choices, with Google building data centers in India while providing indigenous solutions for critical data.”

The knowledge base states that “Sovereignty and innovation must run together, not as competing choices, with Google building data centers in India while providing indigenous solutions for critical data” [S1].

Confirmed (high confidence)

“Google announced new data‑centre capacity in Vizag, which will keep innovation and any data‑residency things within the boundaries of India.”

Google’s planned ₹80,000-crore hyperscale campus in Visakhapatnam (Vizag) is documented as a key data-centre investment aimed at AI and data-residency in India [S85].

Confirmed (high confidence)

“Google has made JEE Main exams, mock exams … available on Gemini free of cost for any student to try.”

Google’s Gemini platform now includes full-length JEE practice tests and mock exams that are freely accessible to students, confirming the rollout of AI-powered JEE preparation tools [S75].

Additional Context (medium confidence)

“India’s ambition to become a global AI hub must rest on complete AI sovereignty and will happen in a few months, not years.”

The knowledge base notes that sovereignty is a contested concept in India, with concerns about isolation and the need for balanced approaches, suggesting that the timeline “few months” is not universally accepted [S59].

Additional Context (medium confidence)

“Google would address the digital divide and serve under‑privileged and rural populations.”

Google’s “Internet Saathi” initiative, which provides internet access to rural women through community networks, illustrates the company’s efforts to bridge the digital divide in underserved Indian regions [S93].

Additional Context (low confidence)

“Google has a history of inclusivity across its products, empowering billions of users.”

A knowledge-base excerpt highlights Google’s longstanding focus on inclusivity through services like Gmail and Search, reinforcing its claim of empowering billions [S5].

External Sources (97)
S1
From KW to GW Scaling the Infrastructure of the Global AI Economy — – Ankush Sabharwal- Sudeesh VC Nambiar
S2
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 5- Sudhakar Gandhey, Former Senior Director at American Express Bank, built Access Cadets Technologies …
S3
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -David Freed- Role/Title: Corporate Vice President and leader of LAM Research’s advanced analytical and simulation softw…
S4
From KW to GW Scaling the Infrastructure of the Global AI Economy — -Sanjay Kumar Sainani- Senior Vice President, Technical Business Development at Vertiv, 35+ years experience in leadersh…
S5
https://dig.watch/event/india-ai-impact-summit-2026/from-kw-to-gw-scaling-the-infrastructure-of-the-global-ai-economy — He’s one of the guiding principles to implement a lot of large -scale data centers for Vertiv or all the entire ecosyste…
S6
From KW to GW Scaling the Infrastructure of the Global AI Economy — – Ankush Sabharwal- Sudeesh VC Nambiar
S8
From KW to GW Scaling the Infrastructure of the Global AI Economy — -Srirang Deshpande- Part of strategy for India, managing Vertiv strategy and market development
S9
Connecting the Unconnected in the field of Education Excellence, Cyber Security & Rural Solutions and Women Empowerment in ICT — – **Ninad S. Deshpande** – Ambassador and Deputy Permanent Representative of India to the WTO in Geneva
S10
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S11
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S12
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S13
From KW to GW Scaling the Infrastructure of the Global AI Economy — -Akanksha Swarup- Moderator/Host conducting interviews and panel discussions
S14
Akanksha Singh — Singh, A. (2020). Indian Perspectives on the ‘Responsibility to Protect’.International  Studies, 57(3): 296-316. (ISSN: …
S15
The reality of science fiction: Behind the scenes of race and technology — ‘Every desireis an endand every endis a desirethenthe end of the worldis a desire of the worldwhat type of end do you de…
S16
IGF Retrospective – Past, Present, and Future — – **Nitin Desai** – Role/Title: Former MAG chair (approximately 5 years), chaired the working group on Internet governan…
S17
From KW to GW Scaling the Infrastructure of the Global AI Economy — Google’s Nitin Gupta reinforced this collaborative approach to sovereignty, emphasising that “sovereignty and innovation…
S19
From KW to GW Scaling the Infrastructure of the Global AI Economy — – Ankush Sabharwal- Jigar Halani- Nitin Gupta – Peter Panfil- Jigar Halani- Sanjay Kumar Sainani
S20
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S21
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S22
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S23
From KW to GW Scaling the Infrastructure of the Global AI Economy — To discuss this, I have two friends, two industry veterans from Vertiv and NVIDIA to discuss the Fireside Chat. So we ha…
S24
From KW to GW Scaling the Infrastructure of the Global AI Economy — – Audience – Moderator – Srikanth Cherukuri – Peter Panfil – Sanjay Kumar Sainani – Srirang Deshpande…
S25
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “Do you think AI Summit has been successful?”[68]. “But, in the next 3 -5 years, what are the main targets for India to …
S26
The Innovation Beneath AI: The US-India Partnership powering the AI Era — I agree with him. I think that the IBM analogies are very good. Very good one. I think we are all focused on the core an…
S27
From Innovation to Impact_ Bringing AI to the Public — If we don’t make for it, our all compounded historical knowledge will be lacking in the next generation. So instead of a…
S28
WS #270 Understanding digital exclusion in AI era — Moderator: Thank you so much, Rashad. I really liked the point when you talked about the human-centred approach when w…
S29
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — Furthermore, the synthesis highlights the positive role of multi-sectoral collaboration in driving disability inclusion….
S30
What policy levers can bridge the AI divide? — ## Key Challenges and Opportunities Lacina Kone: Before talking about the bridging of AI, bridging the gap of the AI, t…
S31
US media executives call for legislation on AI content compensation — Media executives and academic experts testified before the Senate Judiciary Subcommittee on Privacy, Technology and the…
S32
Climate change and Technology implementation | IGF 2023 WS #570 — Speaker:Thank you, Millennium. I’m Sakura Takahashi from Japan. I’m speaking here today on behalf of Climate Youth Japan…
S33
Big Tech boosts India’s AI ambitions amid concerns over talent flight and limited infrastructure — Major announcements from Microsoft ($17.5bn) and Amazon (over $35bn by 2030) have placed India at the centre of global AI …
S34
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Number one, they said, you all come and panel with us at a right price point, right quality, and you declare how much GP…
S35
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — one of our keynote speakers, they said autonomous weapons are going to AI-based autonomous …
S36
Oversight of AI: Hearing of the US Senate Judiciary Subcommitee — So I could go into that more, but I want to flag that. Second is on jobs past performance history is not a guarantee of …
S37
Enhancing rather than replacing humanity with AI — A grandmother in Poland and her grandson, growing up in Dubai, sit together on a video call. She speaks only Polish, and…
S38
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — Albertazzi explained that the traditional concept of individual servers as computing units is becoming obsolete in AI ap…
S39
Part 7: ‘Converging realities: Embedding governance through digital twins’ — As digital and physical systems increasingly interact, they give rise to what we can call embedded realities, i.e.enviro…
S40
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S41
Building Indias Digital and Industrial Future with AI — Deepak Maheshwari from the Centre for Social and Economic Progress provided historical context, tracing India’s digital …
S42
Designing Indias Digital Future AI at the Core 6G at the Edge — This comment connects technical sovereignty to cultural and ethical sovereignty, highlighting that AI systems trained on…
S43
How AI Is Transforming Indias Workforce for Global Competitivene — Artificial intelligence | Human rights and the ethical dimensions of the information society Policy, Governance, and In…
S44
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — “So I am happy to report that these seven sutras which initially we started as a recommendation or guiding principles fo…
S45
Lightning Talk #245 Advancing Equality and Inclusion in AI — The session will present measures that can be taken to operationalise safeguards and remedies against discrimination in …
S46
Indias Roadmap to an AGI-Enabled Future — Shri Ghanshyam Prasad: This comment quantifies the massive scale of energy transformation required for AI infrastructure…
S47
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — Vivek highlights design choices that improve energy efficiency, such as liquid cooling and power‑aware circuits, achievi…
S48
From KW to GW Scaling the Infrastructure of the Global AI Economy — “We have a 30 megawatt data center”[61]. “I think the word PUE is, I don’t know if this is the right word, but probably …
S49
How to Project Europe’s Power / Davos 2025 — Mentions the inefficiency of planning energy investments at national levels rather than European level. Pouyanné calls …
S50
1.1 CHALLENGES IN ENVIRONMENTAL INNOVATION — Second, market actors can lack sufficient information about future prices and costs . For many companies and individuals…
S51
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Economic | Development Four-channel framework showing automation vs. complementation paths, with emphasis on right-hand…
S52
The Foundation of AI Democratizing Compute Data Infrastructure — And they could be partly technological and partly policy -based or protocol -based. And a combination of this will ensur…
S53
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — Construction approaches overview The video contrasts conventional data‑center construction, which follows a sequential,…
S54
Prosperity Through Data Infrastructure — Data integration proves to be a complex task as there are often overlapping pieces of infrastructure that need to work t…
S55
[Tentative Translation] — Looking back at the Science, Technology, and Innovation Policy during the Fifth Basic Plan, digitalization, which is the…
S56
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — ## Industry Perspectives: Systems Integration Challenges Anne Flanagan: Hello, apologies that I’m not there in person t…
S57
Pre 10: Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — Issues particularly evident in joint or cross-force environments where systems must function across organizational, nati…
S58
Indias Roadmap to an AGI-Enabled Future — Shri Ghanshyam Prasad, Chairperson of Central Electricity Authority, outlined India’s energy readiness for AI infrastruc…
S59
Panel Discussion Data Sovereignty India AI Impact Summit — Both speakers emphasize that achieving data sovereignty requires collaborative efforts between government and private se…
S60
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — And that’s the problem we tried to solve. You know way back at that time Jensen was in India. I happened to get to meet …
S61
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Thank you, Prime Minister. It’s an honor to be here, and under your leadership, you have elevated technology from a sect…
S62
AI Meets Agriculture Building Food Security and Climate Resilien — This insight distinguishes AI deployment from traditional technology rollouts, emphasizing iterative improvement over pe…
S63
Open Forum #76 Digital Literacy As a Precondition for Achieving Universal a — Focus on inclusive access for underserved populations Policies should encourage inclusiveness focusing on rural access,…
S64
Digital divides & Inclusion — By improving connectivity and expanding access to the internet, more individuals will be able to bridge the digital divi…
S65
2015 — – The first group, consisting of Targets 2.1, 2.2 and 2.3 is concerned with the inclusion of particular development cate…
S66
WS #270 Understanding digital exclusion in AI era — The discussion underscored the urgency of taking action to prevent further widening of the digital divide as AI technolo…
S67
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Given this outlook, countries of the Global South must prioritise strategies and regulations for the ethical and responsible use…
S68
Bridging the Digital Divide: Inclusive ICT Policies for Sustainable Development — – Hakikur Rahman – Ranojit Kumar Dutta. Barriers to ICT employment include lack of advanced skills (46%) and poor interne…
S69
Multistakeholder Partnerships for Thriving AI Ecosystems — Dr. Bärbel Koffler emphasized that governments must create frameworks and governance structures to ensure AI benefits ar…
S70
From KW to GW Scaling the Infrastructure of the Global AI Economy — Sovereignty and innovation must run together, not as competing choices, with Google building data centers in India while…
S71
WS #204 Closing Digital Divides by Universal Access Acceptance — ### Indigenous Rights and Data Sovereignty Steinhauer-Mozejko Phil: Thank you. I’m gonna try not to get distracted by s…
S72
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — Both speakers acknowledge the challenge of making government data available for AI innovation while protecting sovereign…
S73
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — Ankush highlights that only a small fraction of Indians speak English, making regional language models essential for mas…
S74
Pre 3: Exploring Frontier technologies for harnessing digital public good and advancing Digital Inclusion — Charlotte Gilmartin: Thank you very much. I’m just going to share my screen and show the slides. Because I only have fiv…
S75
AI learning tools grow in India with Gemini’s JEE preparation rollout — Google is expanding AI learning tools in India by adding full-length Joint Entrance Exam practice tests to Gemini, targeti…
S76
Lightning Talk #245 Advancing Equality and Inclusion in AI — The session will present measures that can be taken to operationalise safeguards and remedies against discrimination in …
S77
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — “So one of the key application, key product what we have developed is Fraud Pro”[41]. “We are able to today identify fra…
S78
Indias Roadmap to an AGI-Enabled Future — Shri Ghanshyam Prasad: This comment quantifies the massive scale of energy transformation required for AI infrastructure…
S79
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — This comment is particularly thought-provoking because it challenges conventional thinking about computing architecture….
S80
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — Vivek highlights design choices that improve energy efficiency, such as liquid cooling and power‑aware circuits, achievi…
S81
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — “You can do air-cooled carts and then just use air-cooled servers and running up to 100 to 300 billion parameter model…
S82
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The discussion reveals strong consensus on key strategic directions: comprehensive ecosystem development beyond chip man…
S83
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — 8 year old prodigy: Sharing is learning with the rest of the world. One, an AI that is independent. From large global A…
S84
Keynotes — O’Flaherty, paraphrasing Professor Bradford, calls for a fundamental shift in how the technology regulation debate is fr…
S85
‘AI City Vizag’ moves ahead with ₹80,000-crore Google hyperscale campus in India — Andhra Pradesh will sign an agreement with Google on Tuesday for a 1-gigawatt hyperscale data centre in Visakhapatnam. Off…
S86
AI Innovation in the UK Advances with new Google initiatives — Google is intensifying its investment in the UK’s AI sector, with plans to expand its data residency offerings and launch …
S87
Google invests $1.1 billion in Finnish data centre expansion for AI growth — Google has revealed plans to inject an additional $1.1 billion into its data centre campus expansion in Finland, emphasisi…
S88
€5.5bn Google plan expands German data centres, carbon-free power and skills programmes — Google will invest €5.5bn in Germany from 2026 to 2029, adding a Dietzenbach data centre and expanding its Hanau facility….
S89
Private AI Compute by Google blends cloud power with on-device privacy — Google introduced Private AI Compute, a cloud platform that combines the power of Gemini with on-device privacy. It delive…
S90
Introducing Gemini, Google’s response to ChatGPT — Google’s Alphabet introduces Gemini, its state-of-the-art AI model adept at handling various data formats such as video, …
S91
Google launches Gemini Live and Pro/Ultra AI tiers at I/O 2025 — At Google I/O 2025, the company unveiled significant updates to its Gemini AI assistant, expanding its features, integrat…
S92
Google expands Gemini with real-time AI features — Google has begun rolling out real-time AI features for its Gemini system, allowing it to analyse smartphone screens and ca…
S93
Criss-cross of digital margins for effective inclusion | IGF 2023 Town Hall #150 — In order to expand high-speed internet connectivity in remote or inaccessible areas, the study suggests exploring innova…
S94
Equi-Tech-ity: Close the gap with digital health literacy | IGF 2023 — The digital divide refers to the gap between those who have access to digital technology and those who do not. This divi…
S95
Sangeet Paul Choudary — Drivers also organize themselves to outwit the platform’s algorithms. Qualitative research as well as anecdotal evidence…
S96
The Geopolitics of Materials: Critical Mineral Supply Chains and Global Competition — Jonathan Price outlined the stark mathematics: copper demand will double by 2035, but supply will fall 30% short. This c…
S97
WS #214 Youth-Led Digital Futures: Integrating Perspectives and Governance — Denise Leal: Perfect. Thank you. So when it comes to Indigenous data, there are a lot of questions that we have to …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Ankush Sabharwal
1 argument · 172 words per minute · 287 words · 99 seconds
Argument 1
India as AI Hub & Purpose‑Driven Models
EXPLANATION
Ankush asserts that India will become a global hub for AI development, driven by its aspirational stance and willingness to adopt new technologies for societal and business benefit. He emphasizes that Bharat GPT follows a purpose‑and‑trust mantra, focusing on specific use‑cases and tailoring model size to enterprise needs.
EVIDENCE
He notes India’s aspiration to adopt AI for welfare and predicts rapid emergence of a hub within months [2]. He explains Bharat GPT’s tagline “AI with purpose and trust”, the habit of beginning with the end in mind, selecting model size based on the problem, and partnering with domain experts to train models on relevant data rather than building generic large language models [28-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s ambition to become a global AI hub, with Bangalore highlighted as a focal point, is discussed in [S25]; the emphasis on purpose-driven, retrained models for trust and bias mitigation aligns with observations in [S27]; large AI investments by major tech firms further reinforce the hub narrative [S33].
MAJOR DISCUSSION POINT
Positioning India as a purpose‑driven AI leader
Nitin Gupta
1 argument · 140 words per minute · 366 words · 156 seconds
Argument 1
Sovereignty and Innovation Must Co‑exist; Google’s Indian Data Centers & On‑Prem Data Box
EXPLANATION
Nitin argues that AI sovereignty and innovation are not mutually exclusive; both must progress together. Google is addressing sovereignty by building large data centres in India and offering an on‑premise “Data Box” that provides full Google Gemini AI capabilities while keeping data within the customer’s premises.
EVIDENCE
He describes Google’s new Vizag data centres that keep data residency within India while serving all personas [8]. He then details the indigenous Data Box that runs Google AI services on-premise, giving customers the power of a Google data centre inside their own facilities, including hardware control [10-14].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Google’s approach of building sovereign data centres in India and offering on-premise AI boxes matches the collaborative sovereignty model described in [S1]; the focus on an indigenous GPU layer and startup participation is echoed in [S34]; broader industry investment supporting the balance of sovereignty and innovation is noted in [S33].
MAJOR DISCUSSION POINT
Balancing AI sovereignty with innovation through on‑premise solutions
AGREED WITH
Akanksha Swarup, Peter Panfil
DISAGREED WITH
Akanksha Swarup
Akanksha Swarup
1 argument · 166 words per minute · 321 words · 115 seconds
Argument 1
Inclusivity & Bridging the Digital Divide are Essential for AI Adoption
EXPLANATION
Akanksha raises concerns about ensuring AI benefits reach under‑privileged and rural populations in India. She asks how Google plans to make AI tools accessible and inclusive, reflecting the Prime Minister’s emphasis on digital inclusion.
EVIDENCE
She frames the question by praising India’s AI story and then directly asks about infrastructure, resources, and inclusivity, citing the Prime Minister’s concern and requesting concrete steps to bridge the divide [4-6][49-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for a human-centred, inclusive AI design to avoid digital exclusion is highlighted in [S28]; AI’s role in improving accessibility for persons with disabilities and fostering multi-sector collaboration is covered in [S29]; policy levers to bridge the AI divide, especially broadband access, are discussed in [S30].
MAJOR DISCUSSION POINT
Ensuring AI reaches marginalized communities
AGREED WITH
Nitin Gupta, Peter Panfil
DISAGREED WITH
Nitin Gupta
Sudeesh VC Nambiar
2 arguments · 136 words per minute · 205 words · 90 seconds
Argument 1
AI Mitigates Demand‑Supply Mismatch and Bot Abuse in IRCTC Ticketing
EXPLANATION
Sudeesh explains that the railway ticketing platform faces huge demand‑supply mismatches, especially during Tatkal booking windows, leading to bot abuse. Advanced AI solutions are deployed to detect and curb automated misuse, helping balance demand and supply.
EVIDENCE
He describes the peak booking times (8 am, 10 am, 11 am) and the resulting demand-supply mismatch, noting that bots are used to exploit the system and that a “cat-and-mouse” game ensues, which is being addressed with AI [16-19].
MAJOR DISCUSSION POINT
Using AI to protect high‑traffic public services
AGREED WITH
Nitin Gupta, Jigar Halani
Argument 2
Indigenous Layer and Startup Collaboration Strengthen the Solution
EXPLANATION
Sudeesh highlights that the AI solution includes an indigenous component and leverages collaborations with Indian startups for data analysis and real‑time monitoring. This hybrid approach combines global technology strength with local expertise.
EVIDENCE
He confirms the presence of an indigenous layer, mentions a startup that continuously analyses social-media signals and collaborates with the global tech provider, and notes that the model continuously learns to mitigate automated attacks [21-25].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Collaboration between Indian startups and global technology providers, together with an indigenous AI layer, is described in [S1] and reinforced by the explicit mention of an indigenous component in [S5]; the broader sovereign AI initiative that includes startup GPU contributions aligns with [S34].
MAJOR DISCUSSION POINT
Localizing AI through indigenous layers and startup partnerships
AGREED WITH
Nitin Gupta, Jigar Halani
Peter Panfil
2 arguments · 139 words per minute · 2977 words · 1275 seconds
Argument 1
AI Factories Require Speed‑at‑Scale; Start from the Chip, Not the Grid
EXPLANATION
Peter stresses that building AI capacity demands rapid deployment of compute at scale. He advocates beginning design from the GPU chip level, creating modular pods that can be replicated, rather than starting from power‑grid considerations.
EVIDENCE
He outlines three pillars (speed at scale, moving from chip to grid, and sustainability), explaining that faster GPU pod deployment accelerates token generation and that starting at the chip defines the most efficient infrastructure [106-121].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The chip-first, GPU-centric design philosophy for rapid AI deployment is advocated in [S1]; the concept of AI pods as the fundamental compute unit supports this view in [S38]; a system-level perspective emphasizing modular compute boxes is also noted in [S5].
MAJOR DISCUSSION POINT
Prioritising chip‑first design for rapid AI factory rollout
AGREED WITH
Jigar Halani, Srikanth Cherukuri, Sanjay Kumar Sainani
DISAGREED WITH
Srikanth Cherukuri
Argument 2
AI Will Become an Autonomous Background Function (like breathing) that Frees Human Capacity
EXPLANATION
In response to audience curiosity, Peter likens future AI to automatic bodily functions such as breathing and blinking, suggesting AI will handle mundane tasks autonomously, freeing humans to focus on higher‑order work.
EVIDENCE
He uses the breathing/blinking analogy, stating AI will run in the background, learn user habits, and improve performance, thereby liberating human mental capacity for productive activities [445-458].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion of AI creating new job categories and freeing human capacity aligns with insights on emerging professions in [S36]; the perspective of AI enhancing rather than replacing humanity is explored in [S37].
MAJOR DISCUSSION POINT
AI as an invisible, productivity‑enhancing layer
AGREED WITH
Akanksha Swarup, Nitin Gupta
Jigar Halani
1 argument · 170 words per minute · 3536 words · 1246 seconds
Argument 1
GPU‑First, Pod‑Based, Future‑Proof Architecture; Emphasis on Rapid Deployment
EXPLANATION
Jigar argues that AI infrastructure should be built around GPU‑centric pods, treating the pod as the fundamental building block. This approach enables future‑proofing, allowing easy upgrades across GPU generations while maintaining rapid deployment.
EVIDENCE
He describes the AI-factory concept, the need to think beyond individual racks to whole rows or pods, and cites reference designs that support multiple GPU generations within a single pod, enabling seamless upgrades [99-104][120-124][254-259].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift to GPU-centric pods as the core building block for AI infrastructure is detailed in [S38]; pod-first design recommendations are also present in [S1]; a system-approach that treats compute boxes as platforms is discussed in [S5].
MAJOR DISCUSSION POINT
Pod‑centric, GPU‑first design for scalable AI infrastructure
AGREED WITH
Peter Panfil, Srikanth Cherukuri, Sanjay Kumar Sainani
Srirang Deshpande
1 argument · 124 words per minute · 274 words · 131 seconds
Argument 1
Transition from Outside‑In to Inside‑Out Data‑Center Design for Gigawatt‑Scale AI
EXPLANATION
Srirang outlines a shift in data‑center planning: moving from a traditional “outside‑in” approach (building infrastructure first) to an “inside‑out” model where workloads and GPU requirements drive the overall design, essential for gigawatt‑scale AI deployments.
EVIDENCE
He notes that earlier data centres were built from the outside in, but now the process starts with GPU or workload decisions, after which the full infrastructure is designed around them [71-74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The inside-out methodology, where GPU workloads drive overall data-center design, is outlined in [S1]; the evolution toward AI pods as the primary unit supports this shift in [S38]; a broader system-centric view is mentioned in [S5].
MAJOR DISCUSSION POINT
Reversing design methodology to centre AI workloads
Srikanth Cherukuri
3 arguments · 171 words per minute · 2202 words · 770 seconds
Argument 1
Future‑Proof Design via Row‑Level Density, Digital Twins, and Integrated Telemetry
EXPLANATION
Srikanth proposes that AI‑center designs should move from rack‑level density to row‑ or data‑hall‑level planning, using digital twins and bounding‑box concepts to simulate and future‑proof deployments, and integrating chip‑level telemetry with data‑center controls for optimal operation.
EVIDENCE
He explains the need to think in terms of row-level density, using digital twins to simulate designs, and aligning chip-to-utility mapping for efficiency, while highlighting the lack of telemetry integration between chips and data-center systems [665-669][739-748].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The use of digital twins for AI-center design and simulation is described in [S39]; row-level density planning and pod-centric concepts are discussed in [S38]; the overall design methodology aligns with the inside-out approach in [S1].
MAJOR DISCUSSION POINT
Designing AI data centres with holistic, simulation‑driven, telemetry‑enabled approaches
AGREED WITH
Moderator, Sanjay Kumar Sainani
DISAGREED WITH
Peter Panfil
Argument 2
Integrating Chip‑Level Telemetry with Data‑Center Controls Enables Real‑Time Energy Optimisation
EXPLANATION
He emphasizes that connecting chip‑level performance data to data‑center management systems can automatically optimise energy use, reducing reliance on manual interventions and improving PUE.
EVIDENCE
He points out the current gap where chip telemetry does not talk to data-center telemetry, and describes NVIDIA’s reference design that aims to bridge this gap for real-time energy optimisation [739-748].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The current gap between chip telemetry and data-center management, and the need for integrated monitoring, is highlighted in [S1]; system-level integration concepts are also referenced in [S5].
MAJOR DISCUSSION POINT
Telemetry integration for energy efficiency
AGREED WITH
Sanjay Kumar Sainani
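The loop Srikanth describes, where chip-level power readings feed the facility's cooling controls directly rather than the plant reacting only to room temperature, could be sketched roughly as below. Everything here (the class, the design density, the linear setpoint rule) is an invented illustration, not NVIDIA's or Vertiv's actual design:

```python
# Hypothetical sketch of a chip-to-facility telemetry loop: derive the
# coolant supply setpoint from live GPU power, not room temperature alone.
from dataclasses import dataclass

@dataclass
class RackTelemetry:
    rack_id: str
    gpu_power_kw: float   # reported by chip-level telemetry
    inlet_temp_c: float

def cooling_setpoint(racks: list[RackTelemetry],
                     design_kw_per_rack: float = 130.0) -> float:
    """The hotter the aggregate GPU load runs relative to design,
    the colder the setpoint we request from the cooling plant."""
    utilisation = sum(r.gpu_power_kw for r in racks) / (
        design_kw_per_rack * len(racks))
    # Linear interpolation between 32 C (idle) and 24 C (full load).
    return 32.0 - 8.0 * min(max(utilisation, 0.0), 1.0)

racks = [RackTelemetry("r1", 120.0, 28.0), RackTelemetry("r2", 60.0, 25.0)]
print(f"setpoint: {cooling_setpoint(racks):.1f} C")
```

A real reference design would close this loop through the BMS/DCIM stack with far richer signals; the point of the sketch is only that the chip becomes an input to facility control instead of an isolated telemetry island.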
Argument 3
Vertiv’s Prefabricated Systems and Training Initiatives Reduce Build‑Out Time and Skill Gaps
EXPLANATION
Srikanth highlights Vertiv’s prefabricated, modular systems and associated training programmes as ways to accelerate AI‑factory deployment and address the shortage of skilled personnel.
EVIDENCE
He mentions Vertiv’s SmartRun integrated mechanical-electrical pods, reference designs, and the importance of following these designs to improve adaptability, as well as the need for off-site prefab integration to speed up construction [751-758].
MAJOR DISCUSSION POINT
Prefabrication and training as levers for rapid, skilled deployment
AGREED WITH
Moderator, Sanjay Kumar Sainani
Sanjay Kumar Sainani
3 arguments170 words per minute2078 words733 seconds
Argument 1
Modular Pod Approach Allows Multi‑Generation GPU Upgrades and Efficient Scaling
EXPLANATION
Sanjay explains that using standardized GPU pods enables data centres to support several generations of GPUs without major redesign, simply by reconfiguring cabinets, thereby ensuring efficient scaling and future‑proofing.
EVIDENCE
He describes pods that support three GPU generations, the ability for customers to mix GPU platforms within a pod, and the process of reconfiguring cabinets while keeping power and cooling infrastructure unchanged [692-699].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Standardised GPU pods that support multiple generations of GPUs are presented in [S38]; the pod-centric, future-proof design framework is also covered in [S1].
MAJOR DISCUSSION POINT
Standardised pods for seamless multi‑generation upgrades
AGREED WITH
Peter Panfil, Jigar Halani, Srikanth Cherukuri
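Sanjay's multi-generation pod idea can be shown with a back-of-envelope capacity check: fix the pod's power and cooling envelope once, then see how many racks of each GPU generation fit inside it. The envelope and per-rack densities below are hypothetical, chosen only to echo the kW-range densities discussed in the session:

```python
# Back-of-envelope pod planning (all figures illustrative): the pod's
# power/cooling budget stays fixed across GPU generations, and only the
# rack count and cabinet layout change on upgrade.
POD_ENVELOPE_KW = 1200.0   # assumed fixed power/cooling budget per pod

rack_density_kw = {        # illustrative per-rack densities by generation
    "gen_a": 40.0,
    "gen_b": 120.0,
    "gen_c": 240.0,
}

def racks_per_pod(generation: str) -> int:
    """Racks of this generation that fit the unchanged pod envelope."""
    return int(POD_ENVELOPE_KW // rack_density_kw[generation])

for gen in rack_density_kw:
    print(gen, racks_per_pod(gen), "racks")
```

As density rises, the same envelope hosts fewer, denser racks, which is why the power and cooling plant can stay untouched while cabinets are reconfigured between generations.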
Argument 2
PUE Can Be Misleading; Focus on Thermal‑Cycle Optimisation and Adaptive Cooling
EXPLANATION
Sanjay cautions that PUE figures can be artificially improved by raising ambient temperatures, which may increase server fan power. He advocates optimizing cooling across seasonal thermal cycles and using adaptive cooling technologies to achieve genuine efficiency gains.
EVIDENCE
He explains how raising temperature lowers apparent PUE but raises total power consumption, then discusses seasonal strategies such as free cooling in winter and supplemental chillers in summer to optimise annual PUE [710-720].
MAJOR DISCUSSION POINT
Realistic energy‑efficiency metrics and seasonal cooling strategies
AGREED WITH
Srikanth Cherukuri
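The PUE trap Sanjay describes is easy to show numerically. The figures below are invented for illustration: raising supply temperature cuts cooling power and flatters the ratio, but the extra server-fan power counts as "IT" load, so total consumption can rise even as PUE falls:

```python
# Illustrative PUE arithmetic (all numbers hypothetical).
def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """PUE = total facility power / IT equipment power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# Baseline: 1000 kW IT load, 300 kW cooling, 100 kW other overhead.
base_pue = pue(1000, 300, 100)
base_total = 1000 + 300 + 100

# Warmer supply air: cooling drops by 100 kW, but server fans add
# 150 kW of load that is counted on the IT side of the ratio.
warm_pue = pue(1150, 200, 100)
warm_total = 1150 + 200 + 100

print(f"baseline: PUE {base_pue:.2f}, total {base_total} kW")
print(f"warmer:   PUE {warm_pue:.2f}, total {warm_total} kW")
```

Here the warmer case reports a better PUE (about 1.26 versus 1.40) while drawing 50 kW more in total, which is exactly why Sanjay argues for optimising across the full annual thermal cycle rather than chasing the headline metric.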
Argument 3
Vertiv’s Prefabricated Systems and Training Initiatives Reduce Build‑Out Time and Skill Gaps
EXPLANATION
Sanjay adds that Vertiv’s off‑site prefabrication, combined with training programmes, shortens construction timelines and equips personnel with the necessary skills for AI‑factory operations.
EVIDENCE
He notes the shift from on-site to off-site prefab integration, parallel manufacturing and testing, and the ability to bring fully tested modules into the data hall, thereby condensing build-out time [810-818].
MAJOR DISCUSSION POINT
Off‑site prefab and training to accelerate AI‑factory roll‑out
AGREED WITH
Moderator, Srikanth Cherukuri
Moderator
1 argument · 176 words per minute · 1315 words · 447 seconds
Argument 1
Accelerated Skill‑Development Programs (8‑12 week courses) with IITs to Train DC Ops Personnel
EXPLANATION
The moderator describes a partnership with Indian Institutes of Technology to deliver intensive 8‑12 week programmes that train engineers in data‑center operations and maintenance, addressing the talent shortage for scaling AI infrastructure.
EVIDENCE
He outlines the collaboration with IIT-Chennai, the curriculum covering operation and maintenance, the duration of the programmes, and the availability of these courses online for anyone to enrol [778-784].
MAJOR DISCUSSION POINT
Building a skilled workforce for AI data‑center operations
AGREED WITH
Srikanth Cherukuri, Sanjay Kumar Sainani
Audience
1 argument · 128 words per minute · 418 words · 195 seconds
Argument 1
Concern Over AI Developing Independent Consciousness and Potential Societal Impact
EXPLANATION
An audience member questions whether AI could develop its own subconsciousness, acting independently of human direction, and what implications that might have for society.
EVIDENCE
The participant asks, “What if AI get their own subconsciousness? They don’t need humans to just act” [436-437].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Risks associated with autonomous AI systems, including weaponisation and loss of human control, are discussed in [S35]; broader societal implications of AI-driven job changes are examined in [S36]; the notion of AI enhancing rather than supplanting humanity provides a counter-perspective in [S37].
MAJOR DISCUSSION POINT
Ethical and societal risks of autonomous AI
Agreements
Agreement Points
AI sovereignty must be paired with innovation, requiring on‑premise solutions and indigenous components
Speakers: Nitin Gupta, Sudeesh VC Nambiar, Jigar Halani
Sovereignty and Innovation Must Co‑exist; Google’s Indian Data Centers & On‑Prem Data Box · AI Mitigates Demand‑Supply Mismatch and Bot Abuse in IRCTC Ticketing · Indigenous Layer and Startup Collaboration Strengthen the Solution
All three speakers stress that AI sovereignty does not have to limit innovation; instead, solutions such as Google’s on-premise Data Box [10-14] and the inclusion of an indigenous AI layer together with startup partners [21-25] demonstrate a hybrid approach that keeps data within India while leveraging cutting-edge technology. Jigar further links this to sovereign data-center layers and the need for locally owned inference stacks [470-504].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on on-premise, indigenous AI aligns with India’s sovereign AI policy that calls for domestic capability building and data-center sovereignty, as highlighted in the Data Sovereignty panel at the India AI Impact Summit and NVIDIA’s partnership to foster indigenous layers [S59][S60][S52].
Rapid deployment of AI capacity requires a chip‑first, pod‑centric, modular design (“AI factories”) to achieve speed at scale
Speakers: Peter Panfil, Jigar Halani, Srikanth Cherukuri, Sanjay Kumar Sainani
AI Factories Require Speed‑at‑Scale; Start from the Chip, Not the Grid
GPU‑First, Pod‑Based, Future‑Proof Architecture; Emphasis on Rapid Deployment
Future‑Proof Design via Row‑Level Density, Digital Twins, and Integrated Telemetry
Modular Pod Approach Allows Multi‑Generation GPU Upgrades and Efficient Scaling
Peter outlines a three-pillar approach that begins with the GPU chip and builds modular pods for fast token generation [106-121]. Jigar echoes this by describing AI-factory pods as the fundamental building block and promoting reference designs that support multiple GPU generations [99-104][254-259]. Srikanth reinforces the need to start design from the GPU, using row-level density and digital twins to future-proof deployments [628-658]. Sanjay adds that standardized pods enable seamless multi-generation upgrades without redesigning power or cooling infrastructure [692-699].
POLICY CONTEXT (KNOWLEDGE BASE)
Modular, prefabricated pod designs are promoted as faster, lower-risk ways to build AI infrastructure, contrasting with traditional sequential construction methods (see modular construction overview) [S53] and addressing the limitations of legacy hardware not suited for AI workloads [S57].
Inclusivity and bridging the digital divide are essential for AI adoption in India
Speakers: Akanksha Swarup, Nitin Gupta, Peter Panfil
Inclusivity & Bridging the Digital Divide are Essential for AI Adoption
Sovereignty and Innovation Must Co‑exist; Google’s Indian Data Centers & On‑Prem Data Box
AI Will Become an Autonomous Background Function (like breathing) that Frees Human Capacity
Akanksha raises concerns about reaching under-privileged and rural users [4-6][49-53]. Nitin responds by highlighting Google’s free JEE mock-exam service on Gemini, illustrating a concrete step toward inclusive AI education [58-62]. Peter later emphasizes AI’s societal benefits, such as traffic-management systems that improve daily life for all citizens [430-434].
POLICY CONTEXT (KNOWLEDGE BASE)
Inclusive ICT policies and digital-literacy initiatives stress rural broadband expansion, affordable connectivity, and targeted programs for underserved groups, providing the policy backdrop for bridging the digital divide in AI deployment [S63][S64][S66][S68].
Scaling AI infrastructure demands accelerated skill‑development and training programmes
Speakers: Moderator, Srikanth Cherukuri, Sanjay Kumar Sainani
Accelerated Skill‑Development Programs (8‑12 week courses) with IITs to Train DC Ops Personnel
Future‑Proof Design via Row‑Level Density, Digital Twins, and Integrated Telemetry
Vertiv’s Prefabricated Systems and Training Initiatives Reduce Build‑Out Time and Skill Gaps
The moderator describes an 8-12 week partnership with IIT-Chennai to train data-center operations staff [778-784]. Srikanth highlights Vertiv’s prefabricated, modular pods and the importance of off-site integration to shorten build times while also noting training as part of the solution [751-758][811-818]. Sanjay echoes this by detailing multiple training programmes (including the same 8-12 week format) aimed at developing design, engineering and operational expertise [796-804].
POLICY CONTEXT (KNOWLEDGE BASE)
AI policy pathways highlight the need to complement human labour with upskilling and new-job creation, calling for accelerated training programmes to support AI infrastructure scaling [S51][S66].
Energy efficiency must be addressed realistically; PUE alone can be misleading and telemetry integration is needed
Speakers: Sanjay Kumar Sainani, Srikanth Cherukuri
PUE Can Be Misleading; Focus on Thermal‑Cycle Optimisation and Adaptive Cooling
Integrating Chip‑Level Telemetry with Data‑Center Controls Enables Real‑Time Energy Optimisation
Sanjay warns that raising ambient temperature can artificially improve PUE while increasing overall power use, and proposes seasonal cooling strategies for genuine efficiency gains [710-720]. Srikanth points out the current gap where chip telemetry does not communicate with data-center management, and cites NVIDIA’s reference design that aims to bridge this gap for automatic energy optimisation [739-748].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry commentary notes that PUE is often misused as a sole metric, urging more comprehensive telemetry and realistic energy-efficiency assessments; this mirrors broader concerns about information gaps in energy-cost forecasting [S48][S50].
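Sanjay’s warning about PUE manipulation can be illustrated with a short, hypothetical calculation (the figures below are illustrative only, not numbers from the session). Because PUE divides total facility power by IT power, pushing more power into the IT load itself can lower the ratio even as absolute consumption rises:

```python
# Illustrative PUE arithmetic (hypothetical figures, not from the session).
# PUE = total facility power / IT equipment power, so the ratio can "improve"
# even while absolute consumption goes up.

def pue(it_kw: float, overhead_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return (it_kw + overhead_kw) / it_kw

# Baseline: 1,000 kW of IT load, 400 kW of cooling and other overhead.
base_pue = pue(1000, 400)      # 1.40
base_total = 1000 + 400        # 1,400 kW

# Raised ambient temperature: cooling overhead falls to 320 kW, but server
# fans and silicon leakage push the IT load up to 1,150 kW.
warm_pue = pue(1150, 320)      # ~1.28 -- a "better" PUE
warm_total = 1150 + 320        # 1,470 kW -- yet total draw went UP

print(f"baseline:     PUE={base_pue:.2f}, total={base_total} kW")
print(f"warm ambient: PUE={warm_pue:.2f}, total={warm_total} kW")
```

This is exactly the failure mode the speakers describe: the headline metric improves while the facility consumes more energy, which is why telemetry-driven, whole-facility optimisation is proposed instead of chasing PUE alone.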
Similar Viewpoints
All four speakers advocate a modular, GPU‑centric pod architecture that begins with the chip and is designed for rapid, scalable deployment and future upgrades, emphasizing reference designs, digital twins and row‑level planning [106-121][99-104][254-259][628-658][692-699].
Speakers: Peter Panfil, Jigar Halani, Srikanth Cherukuri, Sanjay Kumar Sainani
AI Factories Require Speed‑at‑Scale; Start from the Chip, Not the Grid
GPU‑First, Pod‑Based, Future‑Proof Architecture; Emphasis on Rapid Deployment
Future‑Proof Design via Row‑Level Density, Digital Twins, and Integrated Telemetry
Modular Pod Approach Allows Multi‑Generation GPU Upgrades and Efficient Scaling
Both stress that AI solutions must be tailored to Indian data‑sovereignty requirements while still delivering innovative capabilities, whether through on‑premise Google Data Boxes or indigenous AI layers that protect critical data [10-14][21-25].
Speakers: Nitin Gupta, Sudeesh VC Nambiar
Sovereignty and Innovation Must Co‑exist; Google’s Indian Data Centers & On‑Prem Data Box
AI Mitigates Demand‑Supply Mismatch and Bot Abuse in IRCTC Ticketing
Both highlight the need for inclusive AI services that reach underserved populations, with Nitin citing free educational tools as an example of bridging the divide [4-6][49-53][58-62].
Speakers: Akanksha Swarup, Nitin Gupta
Inclusivity & Bridging the Digital Divide are Essential for AI Adoption
Sovereignty and Innovation Must Co‑exist; Google’s Indian Data Centers & On‑Prem Data Box
Unexpected Consensus
Both large multinational tech firms (Google and NVIDIA/Vertiv) and Indian public‑sector operators emphasize the development of indigenous AI layers and local data‑center sovereignty
Speakers: Nitin Gupta, Sudeesh VC Nambiar, Jigar Halani, Peter Panfil
Sovereignty and Innovation Must Co‑exist; Google’s Indian Data Centers & On‑Prem Data Box
AI Mitigates Demand‑Supply Mismatch and Bot Abuse in IRCTC Ticketing
Indigenous Layer and Startup Collaboration Strengthen the Solution
AI Factories Require Speed‑at‑Scale; Start from the Chip, Not the Grid
It is surprising that representatives from competing global vendors converge on the importance of building indigenous, sovereign AI capabilities within India, rather than solely promoting their own proprietary stacks. This shared stance underscores a broader national priority for data sovereignty and local innovation [10-14][21-25][470-504][106-121].
POLICY CONTEXT (KNOWLEDGE BASE)
Collaborative efforts between government and private sector to secure data-center sovereignty and build indigenous AI stacks are documented in panel discussions and partnership announcements, underscoring a shared policy goal [S59][S60].
Overall Assessment

The panel shows strong convergence on three core themes: (1) AI sovereignty must be paired with innovation through on‑premise and indigenous solutions; (2) rapid, chip‑first, pod‑centric designs are essential to achieve speed at scale; (3) inclusive access, skill development and realistic energy‑efficiency measures are critical for sustainable AI deployment in India.

High consensus – most speakers, across different organisations (Google, NVIDIA, Vertiv, Indian railways, government‑linked bodies), repeatedly echo the same priorities, indicating a unified strategic direction that can inform policy and investment decisions.

Differences
Different Viewpoints
Preferred design methodology for AI‑factory infrastructure – chip‑first modular pods versus row‑level, digital‑twin driven planning
Speakers: Peter Panfil, Srikanth Cherukuri
AI Factories Require Speed‑at‑Scale; Start from the Chip, Not the Grid
Future‑Proof Design via Row‑Level Density, Digital Twins, and Integrated Telemetry
Peter argues that AI infrastructure should be built by starting at the GPU chip level, creating repeatable GPU-pods that are then replicated (“chip-first” approach) [106-121]. Srikanth counters that designers should move beyond rack-level density to row- or data-hall-level planning, using digital twins and bounding-box concepts to simulate and future-proof deployments, and integrate chip-level telemetry with data-center controls [665-666][739-748]. The two positions differ on what the primary design abstraction should be – individual chips/pods versus holistic row-scale simulation.
Difficulty of retrofitting existing data‑centers for AI workloads
Speakers: Srikanth Cherukuri, Sanjay Kumar Sainani
Retrofit is extremely difficult and scary for enterprises
Modular pod approach allows simple re‑configuration across GPU generations
Srikanth describes retrofitting AI workloads into legacy facilities as a “cumbersome” and “scary” process, noting the many regulatory and engineering hurdles involved [751-758]. Sanjay, by contrast, says that with a standardized GPU-pod you can simply re-configure cabinets to accommodate new GPU generations without major infrastructure changes, making the upgrade path straightforward [692-699]. The disagreement centres on how hard it is to adapt existing sites to AI-centric designs.
POLICY CONTEXT (KNOWLEDGE BASE)
Retrofitting legacy facilities is portrayed as challenging compared with modular, prefabricated approaches that simplify AI-focused upgrades, reflecting industry observations on legacy hardware constraints [S53][S57].
How to achieve inclusive AI access for under‑privileged and rural populations
Speakers: Akanksha Swarup, Nitin Gupta
Inclusivity & Bridging the Digital Divide are Essential for AI Adoption
Sovereignty and Innovation Must Co‑exist; Google’s Indian Data Centers & On‑Prem Data Box
Akanksha explicitly asks how Google will bridge the digital divide and make AI tools accessible to under-served communities, citing the Prime Minister’s concern [49-53]. Nitin replies by highlighting a single initiative – free JEE mock exams on Gemini – as the example of inclusivity, without addressing broader rural connectivity or affordability issues [58-62]. The two speakers differ on the scope and adequacy of measures needed to ensure inclusive AI deployment.
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple reports call for a multi-pronged strategy, combining infrastructure investment, digital-literacy programs, and policy safeguards, to ensure AI benefits reach underserved and rural communities [S63][S64][S66][S68].
Unexpected Differences
Retrofitting legacy data‑centers – perceived difficulty versus modular simplicity
Speakers: Srikanth Cherukuri, Sanjay Kumar Sainani
Retrofit is extremely difficult and scary for enterprises
Modular pod approach allows simple re‑configuration across GPU generations
Both speakers are from the same ecosystem (Vertiv/NVIDIA) yet present opposite views on how hard it is to adapt existing facilities for AI workloads. Srikanth paints retrofitting as a major obstacle, while Sanjay suggests the pod model makes upgrades trivial. The contrast was not anticipated given their shared organisational background.
POLICY CONTEXT (KNOWLEDGE BASE)
The contrast between perceived retrofitting difficulty and the simplicity of modular pod deployment is echoed in discussions of prefabricated data-center construction versus traditional on-site builds [S53][S57].
Scope of inclusivity measures – narrow educational pilot versus broader rural access
Speakers: Akanksha Swarup, Nitin Gupta
Inclusivity & Bridging the Digital Divide are Essential for AI Adoption
Sovereignty and Innovation Must Co‑exist; Google’s Indian Data Centers & On‑Prem Data Box (example of free JEE mock exams)
Akanksha’s question seeks systemic solutions for under‑served populations, yet Nitin’s response focuses on a single educational offering, which does not address connectivity, affordability, or multilingual access. The mismatch between the breadth of the concern and the narrowness of the answer was unexpected.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy debates highlight tension between limited pilot-scale digital-literacy initiatives and broader systemic measures such as rural broadband expansion and inclusive ICT frameworks [S63][S68].
Overall Assessment

The discussion revealed three main fault lines: (1) the optimal design methodology for AI‑factory roll‑out (chip‑first pods vs row‑level, digital‑twin planning); (2) the perceived difficulty of retrofitting existing data‑centers versus the promise of modular pod upgrades; (3) the adequacy of inclusivity initiatives, with a narrow corporate example contrasted against a broader policy‑level demand. While participants share common goals of speed, scale, sustainability and skill development, they diverge on the practical pathways to achieve them.

Moderate – disagreements are technical and strategic rather than ideological, but they signal potential coordination challenges for policy makers and industry partners. If unresolved, differing design philosophies could lead to fragmented investments, and an insufficient focus on inclusive access may limit the societal impact of AI deployments.

Partial Agreements
All four speakers concur that AI infrastructure must be deployed rapidly and at large scale, and that a pod‑centric, modular architecture is central to achieving this. They differ on the sequencing (chip‑first vs row‑level planning) but share the overall goal of fast, scalable rollout [106-121][254-259][692-699][665-666][739-748].
Speakers: Peter Panfil, Jigar Halani, Sanjay Kumar Sainani, Srikanth Cherukuri
AI Factories Require Speed‑at‑Scale; Start from the Chip, Not the Grid
GPU‑First, Pod‑Based, Future‑Proof Architecture; Emphasis on Rapid Deployment
Modular Pod Approach Allows Multi‑Generation GPU Upgrades and Efficient Scaling
Future‑Proof Design via Row‑Level Density, Digital Twins, and Integrated Telemetry
All agree that energy efficiency is a critical concern. Peter stresses sustainable design from the chip onward, Sanjay warns against superficial PUE improvements and advocates seasonal cooling strategies, while Srikanth acknowledges PUE is often misused and calls for telemetry‑driven optimisation. They share the objective of genuine energy savings but propose different levers [106-121][166-170][710-718][739-748].
Speakers: Peter Panfil, Sanjay Kumar Sainani, Srikanth Cherukuri
AI Factories Require Speed‑at‑Scale; Start from the Chip, Not the Grid (includes sustainability)
PUE Can Be Misleading; Focus on Thermal‑Cycle Optimisation and Adaptive Cooling
Future‑Proof Design via Row‑Level Density, Digital Twins, and Integrated Telemetry (mentions PUE abuse)
Both highlight the need for rapid, structured training to address the talent shortage for AI‑factory operations. The moderator describes an 8‑12 week IIT‑Chennai programme, while Srikanth points to Vertiv’s broader training and prefab‑system initiatives, indicating consensus on up‑skilling as essential [778-784][811-818].
Speakers: Moderator, Srikanth Cherukuri
Accelerated Skill‑Development Programs (8‑12 week courses) with IITs to Train DC Ops Personnel
Vertiv’s Prefabricated Systems and Training Initiatives Reduce Build‑Out Time and Skill Gaps
Takeaways
Key takeaways
AI sovereignty and innovation must be pursued together; India aims to become a global AI hub with purpose‑driven, trustworthy models (Bharat GPT).
Google is expanding Indian data‑center capacity and offering an on‑premise “Data Box” that runs Gemini AI services, providing hardware control and data residency.
Inclusivity is a priority – examples include free Gemini‑powered JEE mock exams and efforts to bridge the digital divide in rural areas.
Domain‑specific AI (e.g., IRCTC ticketing) is being used to mitigate demand‑supply mismatches and bot abuse, leveraging both global and indigenous models in partnership with Indian startups.
Building AI infrastructure requires “speed at scale”: start design from the GPU chip, use modular pod‑based reference designs, and adopt an inside‑out (chip‑to‑grid) approach.
Future‑proof data‑center design should focus on row‑level density, digital‑twin simulations, and integrated chip‑to‑facility telemetry to optimise energy use.
Energy efficiency discussions highlighted the limits of PUE as a metric and emphasized thermal‑cycle‑aware cooling, liquid cooling, and adaptive PUE management.
Talent development is critical; Vertiv and partners are launching 8‑12‑week training programmes with IITs and other institutions to create a skilled AI‑factory workforce.
AI is envisioned as an autonomous background function that frees human capacity, while concerns about AI developing independent consciousness were raised.
Resolutions and action items
Google will deploy new data centres in Vizag and make the on‑premise Data Box with full Gemini AI services available to Indian customers.
IRCTC will continue to use a hybrid AI solution – a global core model plus an indigenous layer built with Indian startups – to combat ticket‑booking bots.
Vertiv, NVIDIA and partners will publish and promote GPU‑pod reference designs (including DSX pods) and encourage adoption of NVIDIA‑ready certifications.
Stakeholders agreed to accelerate AI‑factory deployments using modular pod construction, digital‑twin validation and chip‑first design methodology.
Skill‑development programmes (8‑12 week courses) will be rolled out in collaboration with IIT Chennai and other institutions to train data‑center ops and design talent.
Google will keep Gemini‑based educational tools (e.g., JEE mock exams) free for students to improve inclusive access.
Commitment to share learnings from US/European AI‑factory deployments with Indian teams to avoid repeat mistakes.
Unresolved issues
Concrete roadmap and timeline for achieving full AI sovereignty (large‑scale indigenous LLMs) beyond the current small‑to‑mid‑size models.
Specific strategies for extending inclusive AI services to remote, under‑connected rural populations beyond pilot education tools.
Detailed plan for retrofitting existing low‑density data centres to high‑density GPU pods, including cost, regulatory and operational challenges.
Metrics and governance framework for measuring real energy savings versus PUE manipulation; how to standardise chip‑to‑DC telemetry integration.
Audience concerns about AI developing independent consciousness and its societal implications were not fully addressed.
Exact financial models and ROI calculations for gigawatt‑scale AI factories, especially for enterprises with limited capital.
Suggested compromises
Balancing sovereignty with innovation – using global technology (Google, NVIDIA) together with indigenous layers and data to meet local regulatory needs.
Hybrid deployment model for IRCTC: combine global AI capabilities with locally developed models to satisfy both performance and data‑residency requirements.
Adopt a chip‑first design while still considering grid constraints – start from GPU specifications but keep flexibility for future power‑infrastructure upgrades.
Use modular pod designs that can be upgraded across GPU generations, reducing the need for costly full‑scale rebuilds.
Combine rapid deployment (speed) with future‑proofing (row‑level density, digital twins) to meet immediate demand while preserving long‑term ROI.
Thought Provoking Comments
Follow-up Questions
What are the capabilities, deployment models, and adoption status of Google’s indigenous data box solution for on‑premise AI workloads in India?
Understanding this solution is key to assessing how Indian enterprises can achieve data sovereignty while leveraging Google’s AI services.
Speaker: Nitin Gupta
How can Indian startups effectively collaborate with global technology providers to develop and deploy indigenous AI models for high‑demand services like IRCTC?
Exploring partnership frameworks and capacity‑building measures will help create locally‑tailored AI solutions and reduce reliance on external models.
Speaker: Sudeesh VC Nambiar
What specific cooling and energy‑efficiency strategies can be employed to improve PUE across India’s diverse climate zones (e.g., 10‑52 °C range)?
Tailored approaches are needed to optimize data‑center energy use in varying thermal environments, which is critical for sustainable AI scale‑up.
Speaker: Sanjay Kumar Sainani
How can chip‑level telemetry be integrated with data‑center‑level telemetry to enable automated, real‑time energy‑optimization in AI factories?
Automation of telemetry across hardware and facility layers is essential for future‑proof, energy‑efficient AI infrastructure.
Speaker: Srikanth Cherukuri
What are the most effective methods for retrofitting existing data centers to support high‑density GPU workloads without prohibitive cost or downtime?
Retrofitting is a practical challenge for many operators; research into modular, plug‑and‑play solutions can accelerate AI adoption.
Speaker: Srikanth Cherukuri, Sanjay Kumar Sainani
What comprehensive talent‑development roadmap is required to scale AI‑data‑center operations (design, engineering, ops) to meet the projected doubling of capacity each year?
A skilled workforce is a bottleneck; systematic training programs and industry‑academic collaborations are needed to sustain rapid growth.
Speaker: Dal Bhanushali (audience)
How will India’s DPDP (Data Protection) law influence the location of AI compute workloads and the domestic demand for AI infrastructure?
Policy impacts on data residency could drive significant shifts in where AI processing occurs, affecting infrastructure planning.
Speaker: Jigar Halani
Is the target of 10‑12 GW AI compute capacity in India within the next three years realistic, and what detailed roadmap (technology, financing, supply chain) is required to achieve it?
Assessing feasibility and outlining concrete steps is crucial for investors and policymakers to support the AI ecosystem.
Speaker: Jigar Halani, Peter Panfil
What roadmap and investment are needed to develop indigenous Indian AI chips to complete the sovereign AI stack?
Domestic chip development is identified as the missing layer for full AI sovereignty and requires focused research and funding.
Speaker: Jigar Halani
What measurable impact has Google’s free JEE mock‑exam offering had on educational inclusivity for under‑privileged students in rural India?
Evaluating outcomes will inform future initiatives aimed at bridging the digital divide in education.
Speaker: Nitin Gupta
How can the data‑cleaning and model‑training timelines for foundation models in India be shortened without compromising model quality?
Accelerating these phases would speed up AI development cycles and improve competitiveness.
Speaker: Jigar Halani
How can reference designs for AI‑focused data centers be standardized across multiple GPU generations to ensure future‑proofing and avoid costly redesigns?
Standardized, modular designs are essential for rapid scaling and long‑term cost efficiency.
Speaker: Srikanth Cherukuri, Sanjay Kumar Sainani
What financial and operational strategies (e.g., ‘speed to token’) can ensure a viable return on investment for large‑scale AI infrastructure deployments?
Understanding ROI dynamics is vital for sustaining capital‑intensive AI projects.
Speaker: Sanjay Kumar Sainani
Beyond educational tools, what additional AI‑driven services can Google provide to reduce the digital divide for rural and under‑served populations in India?
Identifying broader inclusive applications will help shape policies and product roadmaps for equitable AI access.
Speaker: Akanksha Swarup (question to Nitin Gupta)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.