Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi

20 Feb 2026 12:00h - 13:00h

Session at a glance – Summary, keypoints, and speakers overview

Summary

The session featured Giordano Albertazzi, CEO of Vertiv, who emphasized that while AI discussions often highlight software capabilities, the physical data-center infrastructure that powers AI is equally critical [9-15]. He outlined Vertiv’s role in delivering power, cooling and overall data-center infrastructure, noting the company’s evolution from Emerson to an independent, publicly-traded firm with deep industry expertise [20-22].


Albertazzi explained that the rapid adoption of GPUs has driven extreme rack densification, with power per rack rising from 10-20 kW to potentially 150 kW or even a megawatt, fundamentally altering data-center design [30-33]. This shift requires a coordinated “body” of power-train and thermal systems rather than isolated components, and Vertiv aims to provide fully orchestrated solutions that integrate power, cooling and heat-reuse [45-48][70-76]. He highlighted the move from individual servers to AI pods and ultimately to treating an entire data center as a single compute unit capable of gigawatt-scale workloads [78-84].


To meet the speed and scale demands, Vertiv offers prefabricated, factory-tested modules, such as the “OneCore” system, that can cut deployment time by up to 85% compared with traditional builds [91-97][118-124]. Speaker 3 contrasted conventional sequential construction with prefabricated modular approaches, noting that Vertiv’s solutions combine the benefits of both methods [118-122]. The company’s close collaboration with NVIDIA enables reference designs that match AI workloads and accelerates market adoption [54-58].


Albertazzi stressed India’s strategic importance because of abundant power, favorable demographics and existing Vertiv presence, and announced plans to expand capacity and invest further in the region [52-53][98-108][110-112]. He noted that faster, larger deployments create challenges in time and scale, which Vertiv addresses through integrated, resilient designs [49-51][89-90]. He also described Vertiv’s “future-resilient” architecture that can evolve as AI power densities increase [88-90]. Concluding with optimism, he asserted that the evolving infrastructure will sustain AI growth worldwide and that Vertiv is positioned to lead this transformation [113-115].


Keypoints


Major discussion points


The physical infrastructure (power and cooling) is the foundation that makes AI possible.


Albertazzi stresses that the “very important physical part of AI… makes AI actually possible” and that Vertiv’s role is to supply the best power-train and thermal chain for AI workloads [13-18][30-34].


AI workloads are driving extreme densification, requiring new power-density and voltage architectures.


He notes that racks are moving from 10-20 kW to 30-150 kW and even a megawatt per rack, and that the industry is migrating toward 800-volt DC power to handle this density [30-33][66-67].


Vertiv is promoting modular, pre-engineered solutions (OneCore/OneVert) to speed up deployment and cut labor.


The “fully pre-engineered, defined data center” (OneVert) and “repeatable converged infrastructure” that can scale from 12.5 MW to gigawatts are highlighted, along with prefabrication that can reduce build time by up to 85% [60-62][86-88][96-97].


India is positioned as a strategic hub for AI data-center growth.


Albertazzi points to India’s abundant power, favorable demographics, and existing Vertiv presence, stating the company will “invest… expand capacity” and sees the country as a global AI hub [98-108].


Partnership with NVIDIA and a shift from server-centric to pod-/data-center-as-a-computer architectures.


He praises the collaboration with NVIDIA on reference designs and explains that the unit of compute is evolving from individual servers to AI pods and ultimately to an entire data center operating as a single computer [54-58][78-84].


Overall purpose / goal


The presentation aims to convince the audience that robust, high-density power and cooling infrastructure is the critical enabler for the AI revolution, to showcase Vertiv’s innovative, modular solutions (especially OneCore/OneVert) that can meet these demands quickly and at scale, and to underline the company’s strategic focus on India as a growth market while highlighting its partnership with NVIDIA.


Overall tone


The tone is consistently upbeat, confident, and promotional. Albertazzi begins with enthusiasm about AI’s possibilities, moves into technical detail with authority, then shifts to an optimistic, forward-looking stance when discussing India and future investments. Throughout, the language remains positive (“thrilled,” “optimistic,” “excited”) and never turns critical or defensive. No major tonal shift occurs; the optimism intensifies toward the end as he emphasizes market opportunities and partnerships.


Speakers

Speaker 1


– Role/Title: Moderator / event host who introduces speakers [S4][S6]


– Area of expertise: Not specified


Speaker 3


– Role/Title: Narrator of the Vertiv OneCore video presentation [S8]


– Area of expertise: Not specified


Giordano Albertazzi


– Role/Title: Chief Executive Officer, Vertiv (Representative from Vertiv) [S7]


– Area of expertise: Digital infrastructure solutions for data centers, AI-related power and cooling systems


Additional speakers:


(none)


Full session report – Comprehensive analysis and detailed insights

Introduction (Speaker 1) – Speaker 1 opened the session by introducing Mr Giordano Albertazzi, chief executive officer of Vertiv, a global provider of digital-infrastructure solutions for data centres and communication networks, and noted Vertiv’s ambition to accelerate innovation and support critical applications worldwide [1-3].


Physical layer of AI (Albertazzi) – Albertazzi observes that most AI conversations celebrate what AI can do, especially in India, while the “very important physical part of AI… makes AI actually possible” is often ignored – a point he makes early in his remarks about the need for power, cooling and overall data-centre infrastructure [9-15].


Vertiv background (Albertazzi) – He outlines Vertiv’s heritage: originally part of Emerson Electric, the company has been an independent, publicly-traded entity for almost a decade and brings decades of expertise in delivering the physical layer that enables the rapidly evolving AI-IT stack [18-22].


Extreme densification (Albertazzi) – Rack power density has jumped from the historic 10-20 kW range to 30-150 kW today, with future designs envisaging up to 1 MW per rack, fundamentally altering data-centre design and power-heat management requirements [30-34].
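A quick back-of-the-envelope sketch makes the densification concrete: at a fixed facility IT load, the rack count collapses as per-rack power rises. The 100 MW facility load below is a hypothetical example, not from the talk; only the per-rack densities are the figures quoted in the keynote.

```python
# Illustrative only: the per-rack densities are figures quoted in the keynote;
# the 100 MW facility IT load is a hypothetical example, not from the talk.
FACILITY_IT_LOAD_KW = 100_000  # hypothetical 100 MW of IT load

for density_kw in (15, 50, 150, 1000):
    racks = FACILITY_IT_LOAD_KW / density_kw
    print(f"{density_kw:>4} kW/rack -> {racks:>7.0f} racks")
```

The same facility that once needed thousands of low-density racks would need only a few hundred high-density ones, which is why the talk describes the change as altering the DNA of data-centre design.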


Brain-body analogy & orchestrated infrastructure (Albertazzi) – Using a biological analogy, Albertazzi explains that just as the brain needs a body, the AI “brain” (the IT stack) requires a well-orchestrated “body” of power-train and thermal systems; the power chain must move from grid to chip, and the thermal chain must handle heat extraction, rejection and reuse – all as an interoperable whole [38-48][70-76]. To accommodate the higher densities, Vertiv is transitioning to 800-V DC distribution, which better supports the increased power loads [66-76].
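The case for higher distribution voltage follows directly from Ohm's law: for a fixed rack power P, conductor current is I = P/V and resistive loss scales as I²R, so raising the voltage sharply cuts both. A minimal sketch, where the 1 mΩ bus resistance and the lower comparison voltages are illustrative assumptions (only the 150 kW rack figure and the 800 V target come from the session):

```python
# Why higher distribution voltage helps: I = P / V, resistive loss = I^2 * R.
# The 1 milliohm bus resistance and the lower comparison voltages are
# illustrative assumptions; 150 kW and 800 V are figures from the session.
RACK_POWER_W = 150_000       # a 150 kW rack
BUS_RESISTANCE_OHM = 0.001   # hypothetical distribution-path resistance

for volts in (48, 400, 800):
    amps = RACK_POWER_W / volts
    loss_w = amps ** 2 * BUS_RESISTANCE_OHM
    print(f"{volts:>4} V -> {amps:8.1f} A, ~{loss_w:8.1f} W resistive loss")
```

At 48 V the same rack would draw over 3,000 A through the distribution path; at 800 V it draws under 200 A, which is the practical motivation for the migration described in the session.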


Shift in compute unit (Albertazzi) – He notes that the basic compute unit is shifting from the traditional server to an AI pod, and ultimately to the entire data centre operating as a single computer capable of gigawatt-scale workloads [78-84].


Vertiv’s modular, pre-engineered solution (Albertazzi) – Vertiv showcases a modular data-centre solution (transcribed as “one vertigo, one core”, evidently the Vertiv OneCore/OneVert offering) that provides a repeatable, converged infrastructure and can be scaled from 12.5 MW up to gigawatt capacities, delivering a “future-resilient” platform [60-62][86-88].


Prefabrication advantage (Albertazzi) – The Vertiv SmartRun prefabrication methodology can cut deployment timelines by roughly 85%, close to an order-of-magnitude speed-up compared with traditional, labour-intensive builds [91-97].
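As a sanity check on that figure: an 85% reduction in deployment time corresponds to roughly a 6.7× speed-up, which matches the transcript's hedged “almost an order of magnitude” rather than a full 10×. The arithmetic is simply:

```python
# An 85% reduction in deployment time corresponds to a 1 / (1 - 0.85) speed-up,
# i.e. about 6.7x: close to, but short of, a full order of magnitude (10x).
reduction = 0.85
speedup = 1 / (1 - reduction)
print(f"{reduction:.0%} time reduction = {speedup:.1f}x faster")
```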


India as a strategic AI hub (Albertazzi) – Albertazzi highlights India’s abundant power supply, favourable demographics, and Vertiv’s long-standing presence as reasons to view the country as a global AI centre, and he announces plans to expand capacity and increase investment in the Indian market [52-53][98-108][110-112].


Collaboration with NVIDIA (Albertazzi) – Joint reference designs with NVIDIA are being co-developed to align infrastructure with AI workloads, positioning the partnership as a driver of market leadership and accelerated adoption of AI-optimised data centres [54-58].


Future-resilient design (Albertazzi) – He stresses the need for “future-resilient” designs that can withstand rapid deployment pressures while maintaining reliability, noting that Vertiv’s integrated approach addresses both time-to-market and scale constraints [49-51][89-90].


Closing (Speaker 1 & Albertazzi) – In closing, Albertazzi expresses strong optimism about the role of robust infrastructure in sustaining AI’s growth, thanks the audience for their attention, and is acknowledged by the moderator for his impactful address [113-115][117].


Session transcript – Complete transcript of the session
Speaker 1

Well, ladies and gentlemen, now it’s my pleasure to invite our next speaker, Mr. Giordano Albertazzi, who is the chief executive officer of Vertiv, a global company that provides digital infrastructure solutions for data centers, communication networks. Under his leadership, Vertiv is advancing its role as a global industry leader by accelerating innovation, strengthening technology leadership, and enabling the digital infrastructure that powers critical applications worldwide. Ladies and gentlemen, please welcome Mr. Giordano Albertazzi.

Giordano Albertazzi

Thank you very much. The clicker? Oh, yeah, here. Better with the clicker. Good afternoon, everyone. And it’s absolutely a pleasure and an honor being on this stage where so many distinguished presenters. In the last two days, I’ve had the opportunity to talk about AI. An astonishing thing to me is that the majority of the AI conversations, as it should be, are about what AI can do. Very interesting presentation, just finished, tells about all the beautiful things that AI can do and particularly what AI can do here in India. But when we talk AI, we also talk about data centers. But let me go then to the physical part of AI, not just what AI can do for us.

There is an important, very important physical part of AI that sometimes is overlooked. And it shouldn’t, because it’s that physical part that makes AI actually possible. So I’ll talk about the physical part today. I will talk about the power. The cooling, the data center infrastructure. Vertiv and myself, with Vertiv, have been in the industry for decades. Well, Vertiv longer than me. It used to be part of Emerson, Emerson Electric, and we are almost 10 years as an independent company now publicly traded in New York. But what we do is really make sure that that physical part is provided with the best technology that supports the continuous evolution of the AI IT stack as those rapidly, almost exponentially, and I’m talking almost exponentially from a mathematical standpoint, evolve.

And it’s no easy task, a task that we would do very well because we know the space a lot. We have a lot of innovation. But there are several dimensions to this. One is the extreme densification. Now, we all know what GPUs are. Probably two years ago, majority of people didn’t have any clue about what a GPU is. But now, GPU, NVIDIA is absolutely central to everything, all the conversation about AI. Well, that phenomenal evolution from a technology standpoint is changing the DNA of a data center. What used to be a rack with IT inside 10, 15, 20 kilowatt per rack is rapidly becoming more dense and with more power and heat to dissipate in it. This is going to 30, 50, 150 kilowatts per rack all the way in the possible future.

One megawatt per rack. That’s a lot of power in a single rack. The design of a data center is changing dramatically. As this design changes, of course, also the technology that supports it needs to change. But let me go back to AI, artificial intelligence. Let me go back to, and let me draw a parallel. Human intelligence. Human intelligence happen in the brain. But the brain doesn’t survive without a body. What we are, what we do at Vertiv, make that body, provide that technology for that body so that the brain can function, and that brain is the IT stack. But not only that brain can function, but also can produce intelligence. And that’s what an AI does.

That’s what an AI factory, an AI data center is doing. But just like the body, historically, data centers and data center engineering was viewed as disparate systems coming together. Now, we cannot think about a human body or anybody as individual parts, a chiller, a liquid cooling unit, an uninterruptible power supply, or whatever else in the powertrain or thermal chain you can think of. Everything needs to be orchestrated. Everything needs to be interoperable. Everything must be thought of as one thing. And that’s what we do in a world that is extraordinarily challenging, but it’s a challenge that we, of course, respond to very successfully, challenging in terms of time of deployment and in terms of scale of deployment.

Okay. Data centers need to be developed faster and faster and are becoming bigger and bigger. You heard that. It is about data centers. India is certainly privileged from an AI standpoint also because there is a lot of power available that can be harnessed for more and more powerful and larger data centers. Now, as that happens, again, if you think, go back to my analogy of the body, you think about a system, you think about everything that is the body of artificial intelligence, then it is about changing the way we build that body from one piece at a time with a lot of activity happening on site, laborious, hard from a quality standpoint, to most integrated at factory level and deployment.

As NVIDIA continues to lead the world in terms of technology, in terms of IT stack, but also in terms of thought process for the infrastructure. And it’s something that we do a lot together. Well, then it is not just about the infrastructure and the speed and the size and the scale. It’s also about optimizing the infrastructure with reference designs that exactly target that type of application. So we, of course, are thrilled and always honored to partner with NVIDIA in this adventure and venture and lead the market in this respect. So here you have an example of what we call one vertigo, one core. There’s an example of a fully pre-engineered, defined data center. But when we talk about the body, the body of AI, the data center, then let’s talk very simply.

We talk about three fundamental, fundamental elements of that body. One is the powertrain. So everything that goes from the grid, if you will, for your utility, takes that power all the way to the chip. That power infrastructure is changing, is evolving as the power density changes. And the current architectures are migrating towards, over time, what is an 800-volt DC power infrastructure. I’m going technical on you. Some of you I know are very technical, so I’m not afraid about that. But I will not go deep. So everything you see on the left side of this is exactly a representation of that powertrain. So bring the energy, take that energy to the chip. Then the chip and all the electronic components in a server generate heat.

And that heat can be very dense. And require very, very advanced cooling mechanisms. And that’s the beginning of what we like to call a thermal chain that starts, and it’s what you see on the right side of this chart from the chip all the way to the heat extraction, the heat then rejection, or even more importantly, and more extensively so, is the heat reuse. So this is the system, the fundamental systems of this body. But again, it’s not just about the components of the system, it’s how the entire system works. And more and more, we see that when we think about the AI IT infrastructure, what used to be thought of as a server at a time is becoming really an AI, an AI pod, an AI unit at a time. The unit of compute is no more the server. It is the pod. Unit of compute is not even the pod. It’s the entire data center operating as one single computer. A unit of compute that can go all the way to gigawatts. So it is about making sure, and I believe we do it very well, that I say uniquely well, but of course I root for ourselves. It is about making that infrastructure available at scale and in a very easy modular to deploy fashion. And that’s what we do. So a repeatable converged infrastructure major. So we have a lot of building blocks that can go from a 12.5 megawatts all the way to gigawatts, all the way to gigawatts.

So clearly it’s not just about building that infrastructure, but that infrastructure over time needs to be, like we like to say, future resilient. Some people, like myself, have been in the industry of data center for quite some time, and it’s fascinating the speed at which things happen. And this speed is also enabled by new solutions that make prefabricated and very fast to deploy part of data centers that used to be very, very laborious. Take a data center. It’s empty when the building is new. You have to fill it with power, with cooling, with cables. You have to put the racks. Very laborious and time consuming. Time to token is of the essence. Prefabrication, for example, with what we do with Vertiv SmartRun, reduces the time to deploy almost 85%, almost an order of magnitude.

So the industry is changing, not only in scale and in density, but also in the way things are done and deployed. Let me take a different angle now and focus on India. India clearly central to the AI evolution revolution and central certainly in terms of the infrastructure that is being built and the infrastructure that will be built in the future and in the coming years. This infrastructure and the speed at which this infrastructure will be built, of course, will depend, as I was saying, by the ability of the likes of Vertiv, but certainly Vertiv, given our prominent position also in India, to really enable this at scale and at speed in the ways that I explained.

So Vertiv in India has a long tradition. We’ve been here for decades. We have what I believe is an awesome team and awesome partnerships. And now this forum, these sessions, these few days convinced me even more of the importance of India as a place. A place to invest. And invest we will. We are expanding our capacity and will continue to expand capacity. We see India certainly as an extremely promising market as a hub for AI, not only for India, but globally. So it has got the power availability. Certainly India has got the right demographics. So I couldn’t be more excited about the business in India. I couldn’t be more excited about what we’re doing in India and what our partners are doing in India.

So with that, I’m extremely optimistic. I’m a big optimist about what AI will bring, as we heard. And with that, thank you very much. Thank you.

Speaker 1

Thank you so much, Mr. Albertazzi, for your impactful address and also for…

Speaker 3

Data centers have, up until now, been usually constructed in one of two ways. Traditional data center build follows a sequential process, materials and equipment arriving individually on site, with the build progressing from the ground up. Alternatively, prefabricated modular construction can offer many advantages, such as quicker deployments and risk reduction. Vertiv offers many solutions in this space. However, in the age of increasing IT loads powered by artificial intelligence, there’s another option that combines the advantages of both. Vertiv OneCore. Vertiv power and thermal infrastructure building blocks are inserted into a brand new Vertiv-supplied steel building shell, or an existing building. Infrastructure building blocks are made in controlled factory environments and tested before construction. The system is also equipped with a new, more efficient…

Related Resources – Knowledge base sources related to the discussion topics (13)
Factual Notes – Claims verified against the Diplo knowledge base (4)
Confirmed (high confidence)

“Speaker 1 introduced Giordano Albertazzi as chief executive officer of Vertiv, a global provider of digital‑infrastructure solutions for data centres and communication networks.”

The knowledge base identifies Giordano Albertazzi as CEO of Vertiv discussing critical physical infrastructure for AI, confirming his role and the company’s focus on data-centre solutions [S8] and [S7].

Confirmed (high confidence)

“Albertazzi observes that most AI conversations celebrate what AI can do, especially in India, while the “very important physical part of AI… makes AI actually possible” is often ignored.”

Albertazzi’s emphasis on the overlooked physical infrastructure that enables AI is directly stated in the knowledge base [S8].

Additional Context (medium confidence)

“Rack power density has jumped from the historic 10‑20 kW range to 30‑150 kW today, with future designs envisaging up to 1 MW per rack.”

The source notes the evolution from a few kilowatts per rack to 10-30 kW and cites current deployments in India at around 80 kW per rack, confirming the upward trend but not the 1 MW projection [S50].

Confirmed (high confidence)

“India’s abundant power supply, favourable demographics, and Vertiv’s long‑standing presence make the country a global AI centre; Vertiv plans to expand capacity and increase investment in the Indian market.”

Albertazzi’s remarks about India as a strategic AI hub and Vertiv’s activities there are reflected in the knowledge base, which highlights his focus on India’s physical-layer potential and ongoing high-density rack deployments [S8] and [S50]; broader commentary on India’s AI opportunities appears in [S47].

External Sources (50)
S1
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S2
S3
Advancing Scientific AI with Safety Ethics and Responsibility — – Speaker 1- Speaker 2- Speaker 3 – Speaker 1- Speaker 3- Moderator
S4
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S5
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S6
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S7
S8
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — – Giordano Albertazzi- Announcer – Giordano Albertazzi- Video presentation Artificial intelligence | Information and c…
S9
https://dig.watch/event/india-ai-impact-summit-2026/heterogeneous-compute-for-democratizing-access-to-ai — That’s the edge cloud. And as you go deeper from there onwards, then you have the data centers. It then mitigates the ov…
S10
The Global Power Shift India’s Rise in AI & Semiconductors — The panelists emphasized that true AI leadership requires alignment across four key pillars: silicon, software, systems,…
S11
From KW to GW Scaling the Infrastructure of the Global AI Economy — Good morning to all of you. As Rakesh has already introduced, two companies are planning for a lot of things together. A…
S12
WS #53 Promoting Children’s Rights and Inclusion in the Digital Age — – Speaker 3 (Janatu): Department of Public Administration, Kumile University Speaker 3: Hello, everyone. My topic is…
S13
Internet Society’s Collaborative Leadership Exchange (CLX) | IGF 2023 Day 0 Event #95 — Speaker 3:I’m Jeremy. I’m from Myanmar. Today I just would like to point out the digital guidelines about the online gov…
S14
https://dig.watch/event/india-ai-impact-summit-2026/inclusive-ai-starts-with-people-not-just-algorithms — So we’re going to give like 30 seconds to each of the panelists as they close. I mean, I think on learning you just star…
S15
Any other business /Adoption of the report/ Closure of the session — In summary, the speaker artfully blended expressions of gratitude with recognition of collaborative efforts and a call f…
S16
Open Mic & Closing Ceremony — 9. Recognition and Appreciation: Hajia Sani: Hmm. Another round of applause, please. Another round of applause. Thank y…
S17
Masterclass#1 — Sherif Hashem :Sure, I’d like to thank all the speakers for such excellent and comprehensive presentations, but I’d like…
S18
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S19
From summer disillusionment to autumn clarity: Ten lessons for AI — Overall, what’s notable in all these political developments is pragmatism. The lofty narratives of last year – like fear…
S20
The Innovation Beneath AI: The US-India Partnership powering the AI Era — So what is going to be scarce in the times to come is not electrification, as Roshani said. We have enough math works wh…
S21
Building Climate-Resilient Systems with AI — How do you execute on it? How do you start delivering the outcome that I think we all are looking for? So that’s the kin…
S22
AI adoption leaves workers exhausted as a new study reveals rising workloads — Researchers from UC Berkeley’s Haas School of Businessexaminedhow AI shapes working habits inside a mid-sized technology…
S23
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — India’s unique position—combining technical talent, diverse datasets, a vibrant startup ecosystem, and supportive policy…
S24
Building the Next Wave of AI_ Responsible Frameworks & Standards — What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed …
S25
Book launch: What changes and remains the same in 20 years in the life of Kurbalija’s book on internet governance? — ## The Shift from Knowledge to Data in Policy Language 3. **Processing Architecture Shift**: The transition from CPU-ba…
S26
Nvidia partners with Reliance and Tata to expand AI presence in India’s growing ecosystem — Nvidia, a semiconductor company in California,has revealed plans for partnerships with major Indian corporations, Relian…
S27
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — Moderator Alexander E. Brunner opened with a provocative observation based on recent conversations with technology leade…
S28
NVIDIA powers a new wave of specialised AI agents to transform business — Agentic AIhas entereda new phase as companies rely on specialised systems instead of broad, one-size-fits-all models. Op…
S29
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — The speaker stressed that modern data centers can no longer be viewed as disparate systems but must be orchestrated as i…
S30
Opportunities of Cross-Border Data Flow-DFFT for Development | IGF 2023 WS #224 — However, it is crucial to address the trust deficit between users and companies. To achieve this, public policy framewor…
S31
Regulating Open Data_ Principles Challenges and Opportunities — Digital ecosystems simply do not function in silos. However, enabling data to move across borders should not mean that c…
S32
From KW to GW Scaling the Infrastructure of the Global AI Economy — Project timelines have compressed from 18 months in cloud world to 4-6 months in GPU world, requiring faster capacity bu…
S33
The Innovation Beneath AI: The US-India Partnership powering the AI Era — So very much working on that. And on your question from an innovation perspective, well, we all know the hype cycle. And…
S34
Building Climate-Resilient Systems with AI — How do you execute on it? How do you start delivering the outcome that I think we all are looking for? So that’s the kin…
S35
From KW to GW Scaling the Infrastructure of the Global AI Economy — And it’s going to be a system approach. System. Systems. Think systems. we as an industry have thought boxes for too lon…
S36
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-giordano-albertazzi — There is an important, very important physical part of AI that sometimes is overlooked. And it shouldn’t, because it’s t…
S37
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — “And the current architectures are migrating towards, over time, what is an 800 -volt DC power infrastructure.”[50]. “An…
S38
AI adoption leaves workers exhausted as a new study reveals rising workloads — Researchers from UC Berkeley’s Haas School of Businessexaminedhow AI shapes working habits inside a mid-sized technology…
S39
The Global Power Shift India’s Rise in AI & Semiconductors — The panelists emphasized that true AI leadership requires alignment across four key pillars: silicon, software, systems,…
S40
Indias Roadmap to an AGI-Enabled Future — Dua argued that India could become a global compute hub, potentially processing 40-50% of the world’s data by leveraging…
S41
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — Data sovereignty policies requiring local data storage are essential to drive domestic data center investment and capita…
S42
Welcome Address — India positions itself as a central hub of technology talent, leveraging a strong IT background and dynamic startup ecos…
S43
Book launch: What changes and remains the same in 20 years in the life of Kurbalija’s book on internet governance? — 3. **Processing Architecture Shift**: The transition from CPU-based to GPU-based computing, fundamentally altering how c…
S44
Nvidia partners with Reliance and Tata to expand AI presence in India’s growing ecosystem — Nvidia, a semiconductor company in California,has revealed plans for partnerships with major Indian corporations, Relian…
S45
Intel to design custom CPUs as part of NVIDIA AI partnership — The two US tech firms, NVIDIA and Intel,have announceda major partnership to develop multiple generations of AI infrastr…
S46
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — . . . . . . . . . . . . . . one of our keynote speakers, they said autonomous weapons are going to AI -based autonomous …
S47
From India to the Global South_ Advancing Social Impact with AI — And I think with the current government’s focus on multiple domains like logistics, maybe marine, aeronautics, aviation,…
S48
https://dig.watch/event/india-ai-impact-summit-2026/indias-roadmap-to-an-agi-enabled-future — They are not connected end to end in terms of building a digital twin of this electric system, right? We built something…
S49
Comprehensive Summary: The Future of Robotics and Physical AI — And that actually is very important for roboticists to understand. There’s a lot of power in understanding the physical …
S50
Keynote-Olivier Blum — For those who are not very familiar with what is a data center, we are talking, about a couple of years, about a couple …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Giordano Albertazzi
7 arguments, 123 words per minute, 1705 words, 830 seconds
Argument 1
Extreme GPU densification drives high power per rack (Giordano Albertazzi)
EXPLANATION
The rapid adoption of GPUs for AI workloads is dramatically increasing the power density of racks. What used to be 10‑20 kW per rack is now moving toward 30‑150 kW and even up to a megawatt per rack, creating new challenges for data‑center design.
EVIDENCE
Albertazzi describes the shift from modest rack power levels (10-20 kW) to much higher densities, noting that a rack could reach 30, 50, 150 kW and potentially one megawatt as GPU usage expands [25-33].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Albertazzi’s presentation notes the shift from traditional 10-20 kW racks to 30-150 kW and even megawatt-level racks as AI GPUs proliferate, corroborated by the keynote summary in [S8].
MAJOR DISCUSSION POINT
Physical infrastructure demands of AI workloads
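To make the scale of this densification concrete, the sketch below translates the rack-level figures cited in the talk into facility-level power. The rack counts and the uniform-density assumption are illustrative, not figures from the session:

```python
# Illustrative arithmetic (assumed rack counts, not from the session):
# how the per-rack densities cited in the talk scale to facility power.

def facility_power_mw(racks: int, kw_per_rack: float) -> float:
    """Total IT load in megawatts for a hall with uniform rack density."""
    return racks * kw_per_rack / 1000

# A hypothetical 10,000-rack hall at traditional vs. AI-era densities:
legacy = facility_power_mw(10_000, 15)    # 10-20 kW era  -> ~150 MW
ai_era = facility_power_mw(10_000, 150)   # 150 kW racks  -> 1,500 MW (1.5 GW)

print(f"Legacy hall: {legacy:.0f} MW; AI-era hall: {ai_era:.0f} MW")
```

At these assumed densities a single hall crosses the gigawatt threshold Albertazzi describes, which is why the talk treats the whole facility, rather than the rack, as the design unit.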
Argument 2
Shift to 800‑V DC power and advanced cooling needed (Giordano Albertazzi)
EXPLANATION
To support the higher power densities, data‑center power architectures are moving toward 800‑volt DC distribution. At the same time, the resulting heat loads require sophisticated cooling and thermal‑chain solutions.
EVIDENCE
He explains that the powertrain is migrating to an 800-V DC infrastructure and that the heat generated by dense compute requires advanced cooling mechanisms and heat-reuse strategies [64-67][73-75].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He emphasizes migration to 800-volt DC distribution and the need for sophisticated cooling and heat-reuse solutions, as described in the same keynote coverage [S8].
MAJOR DISCUSSION POINT
Physical infrastructure demands of AI workloads
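The session does not walk through the electrical reasoning behind the 800 V DC move, but the standard motivation can be sketched: for a fixed power draw, current falls as voltage rises (I = P / V), and resistive distribution losses scale with I²R, so doubling the voltage quarters the loss. The resistance and rack figures below are assumed for illustration:

```python
# Back-of-envelope sketch (assumed figures, not from the session): why
# higher-voltage DC distribution helps at AI rack densities.

def conductor_loss_kw(power_kw: float, volts: float, resistance_ohm: float) -> float:
    """I^2 * R loss in a distribution path feeding a load of power_kw."""
    current_a = power_kw * 1000 / volts          # amps drawn at this voltage
    return current_a**2 * resistance_ohm / 1000  # loss, back in kW

rack_kw, r_path = 150, 0.001  # 150 kW rack, 1 mOhm path (both assumed)
print(conductor_loss_kw(rack_kw, 400, r_path))  # lower-voltage feed
print(conductor_loss_kw(rack_kw, 800, r_path))  # 800 V DC: 1/4 the loss
```

The same logic also shrinks busbar and cable cross-sections, which matters when hundreds of kilowatts must be delivered into a single rack footprint.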
Argument 3
Components must operate as a unified, interoperable body (Giordano Albertazzi)
EXPLANATION
Albertazzi argues that data‑center subsystems—power, cooling, UPS, etc.—cannot be treated as separate pieces. They must be orchestrated and interoperable, functioning as a single integrated “body” that supports AI workloads.
EVIDENCE
He uses a body analogy, stating that data-center components need to be orchestrated, interoperable, and thought of as one thing rather than disparate systems [45-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The talk uses a body analogy, arguing that power, cooling, UPS and other subsystems must be orchestrated as a single entity; this integrated view is highlighted in the keynote analysis [S8].
MAJOR DISCUSSION POINT
Integrated, orchestrated data‑center design
AGREED WITH
Speaker 3
Argument 4
Compute unit evolving from server to pod to whole‑center scale (Giordano Albertazzi)
EXPLANATION
The traditional server is being replaced by AI pods, and ultimately the entire data centre is treated as a single computer. This shift reflects the need to handle gigawatt‑scale compute for AI.
EVIDENCE
He notes that the unit of compute has moved from the server to the AI pod and now to the whole data centre operating as one computer, capable of gigawatt-level power [78-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Albertazzi describes the progression from individual servers to AI pods and finally to the entire data centre acting as one computer, a shift documented in the keynote summary [S8].
MAJOR DISCUSSION POINT
Integrated, orchestrated data‑center design
Argument 5
Vertiv Smart Run prefabrication reduces deployment time by ~85% (Giordano Albertazzi)
EXPLANATION
Prefabricated, factory‑built data‑center modules can dramatically shorten construction schedules. Vertiv’s Smart Run solution claims to cut deployment time by roughly 85%, enabling faster scaling of AI infrastructure.
EVIDENCE
He describes how prefabrication, exemplified by Vertiv Smart Run, reduces the time to deploy a data centre by almost 85%, an order-of-magnitude improvement over traditional builds [91-97].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He cites Vertiv’s Smart Run modular, factory-built solution cutting deployment schedules by roughly 85%, a claim supported by the discussion of prefabricated construction benefits in [S8].
MAJOR DISCUSSION POINT
Modular and prefabricated construction for rapid deployment
AGREED WITH
Speaker 3
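In calendar terms, the claimed ~85% reduction can be sketched as follows; the baseline durations are illustrative, since the session itself does not state a traditional-build baseline:

```python
# Hedged sketch of what an ~85% schedule cut means in calendar time.
# Baseline build durations are assumptions, not figures from the session.

def prefab_schedule_months(traditional_months: float, reduction: float = 0.85) -> float:
    """Deployment time remaining after the claimed fractional reduction."""
    return traditional_months * (1 - reduction)

for baseline in (12, 18, 24):
    print(f"{baseline} mo traditional -> {prefab_schedule_months(baseline):.1f} mo prefab")
```

Even at a conservative 12-month baseline, an 85% cut brings deployment under two months, which is the kind of compression the argument says AI demand requires.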
Argument 6
India’s power availability and demographics position it as a global AI hub (Giordano Albertazzi)
EXPLANATION
Albertazzi highlights India’s abundant power resources and favorable demographics as key factors that make the country an attractive location for large‑scale AI data centres. He sees India as a strategic hub not only for domestic AI but for the global market.
EVIDENCE
He points out that India is privileged from an AI standpoint because of abundant power and the right demographics, emphasizing its role as a global AI hub [52-53][109-111].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel on India’s AI rise highlights the country’s abundant power resources, talent pool and favorable demographics as key factors for becoming a global AI hub, aligning with Albertazzi’s point [S10].
MAJOR DISCUSSION POINT
India’s strategic role in AI data‑center growth
Argument 7
Vertiv’s long‑standing presence and capacity‑expansion plans underscore commitment to India (Giordano Albertazzi)
EXPLANATION
Vertiv has operated in India for decades, building a strong team and partnerships. The company plans to expand capacity further, reinforcing its commitment to making India a major AI data‑center hub.
EVIDENCE
He references Vertiv’s decades-long presence, an “awesome team,” ongoing capacity expansion, and the view of India as an extremely promising market and AI hub [101-108].
MAJOR DISCUSSION POINT
India’s strategic role in AI data‑center growth
Speaker 3
1 argument, 156 words per minute, 149 words, 57 seconds
Argument 1
Vertiv OneCore combines modular building blocks with a steel shell for fast builds (Speaker 3)
EXPLANATION
OneCore integrates Vertiv’s power and thermal infrastructure modules into a pre‑engineered steel building shell, whether new or retrofitted. This approach merges the speed of modular construction with the robustness of a full building envelope.
EVIDENCE
The speaker explains that Vertiv OneCore inserts power and thermal building blocks into a Vertiv-supplied steel building shell, with components manufactured and tested in a factory before on-site installation [123-125].
MAJOR DISCUSSION POINT
Modular and prefabricated construction for rapid deployment
AGREED WITH
Giordano Albertazzi
Speaker 1
1 argument, 131 words per minute, 88 words, 40 seconds
Argument 1
Appreciation and recognition of the impactful address (Speaker 1)
EXPLANATION
Speaker 1 thanks the presenter for his impactful address, acknowledging the value of the contribution to the forum.
EVIDENCE
The closing remark thanks Mr. Albertazzi for his impactful address [117].
MAJOR DISCUSSION POINT
Closing acknowledgment
Agreements
Agreement Points
Modular and prefabricated construction dramatically shortens deployment time for AI data centers
Speakers: Giordano Albertazzi, Speaker 3
Vertiv Smart Run prefabrication reduces deployment time by ~85% (Giordano Albertazzi)
Vertiv OneCore combines modular building blocks with a steel shell for fast builds (Speaker 3)
Both speakers stress that Vertiv’s modular, factory-built solutions – Smart Run (which cuts deployment time by about 85%) and OneCore (which inserts pre-engineered power and thermal blocks into a steel building shell) – enable much faster roll-out of large AI-driven data centres [91-97][123-125].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry analyses note that project timelines have compressed from 18 months to 4-6 months, and reference designs with prefabricated modules enable faster capacity building by shifting testing and integration off-site [S32].
Data‑center subsystems must be treated as a single, interoperable system
Speakers: Giordano Albertazzi, Speaker 3
Components must operate as a unified, interoperable body (Giordano Albertazzi)
Vertiv OneCore combines modular building blocks with a steel shell for fast builds (Speaker 3)
Albertazzi uses a body analogy to argue that power, cooling, UPS and other components need to be orchestrated as one entity, while Speaker 3 describes OneCore as a solution that physically integrates power and thermal modules into a single building envelope, reflecting the same integrated-system perspective [45-48][123-125].
POLICY CONTEXT (KNOWLEDGE BASE)
Thought leaders stress that modern data centers should be orchestrated as integrated, interoperable units rather than disparate components [S29], echoing broader policy calls for open, standards-based, interoperable frameworks in digital ecosystems [S30][S31].
Similar Viewpoints
Both emphasize that a coherent, tightly integrated infrastructure – rather than a collection of disparate pieces – is essential for supporting the high‑density AI workloads of the future [45-48][123-125].
Speakers: Giordano Albertazzi, Speaker 3
Components must operate as a unified, interoperable body (Giordano Albertazzi)
Vertiv OneCore combines modular building blocks with a steel shell for fast builds (Speaker 3)
Both present Vertiv’s modular, factory‑built approaches as the answer to the speed and scale challenges posed by AI‑driven data‑center expansion [91-97][123-125].
Speakers: Giordano Albertazzi, Speaker 3
Vertiv Smart Run prefabrication reduces deployment time by ~85% (Giordano Albertazzi)
Vertiv OneCore combines modular building blocks with a steel shell for fast builds (Speaker 3)
Unexpected Consensus
Overall Assessment

The speakers converge on two main ideas: (1) modular, prefabricated construction (Smart Run, OneCore) is a key accelerator for AI data‑center deployment, and (2) the physical infrastructure must be conceived as a unified, interoperable system rather than isolated components. These points reflect a clear consensus on the technical and operational pathways needed to meet the rapid growth of AI workloads.

Moderate to strong consensus on infrastructure integration and rapid‑deployment solutions, indicating that participants share a common vision for how the data‑center ecosystem should evolve to support AI. This alignment suggests that industry stakeholders are likely to collaborate on standardising modular designs and integrated power‑thermal architectures.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The discussion shows strong alignment among the speakers on the need for integrated, high‑density AI data‑center infrastructure and the value of modular, prefabricated construction. The only variation is in the specific Vertiv product highlighted (Smart Run vs OneCore). No substantive contradictions were identified.

Low – the participants largely concur on the challenges and objectives, with only minor differences in preferred implementation pathways, suggesting a cohesive industry stance on accelerating AI‑driven data‑center deployment.

Partial Agreements
Both speakers emphasize the importance of modular, prefabricated solutions to accelerate data‑center deployment, but they highlight different Vertiv offerings – Albertazzi focuses on the Smart Run approach that cuts build time by about 85% [91-97], while Speaker 3 describes the OneCore system that integrates power and thermal modules into a steel building shell [123-125]. The emphasis on distinct products shows agreement on the goal (rapid deployment) but differing views on the preferred solution.
Speakers: Giordano Albertazzi, Speaker 3
Vertiv Smart Run prefabrication reduces deployment time by ~85% (Giordano Albertazzi)
Vertiv OneCore combines modular building blocks with a steel shell for fast builds (Speaker 3)
Takeaways
Key takeaways
AI workloads demand extreme physical infrastructure, with GPU densification driving rack power needs from ~20 kW to potentially 1 MW.
Data‑center power architecture is shifting toward high‑voltage (800 V DC) systems and advanced cooling/thermal chains, including heat reuse.
Modern data‑center design must be fully integrated and orchestrated; the compute unit is evolving from individual servers to pods and ultimately whole‑facility “computers.”
Modular, prefabricated solutions (Vertiv Smart Run, Vertiv OneCore) can cut deployment time by up to ~85%, enabling faster scaling of AI‑focused facilities.
India is positioned as a strategic global AI hub due to abundant power, favorable demographics, and Vertiv’s long‑standing presence; Vertiv plans to expand capacity and investment in the region.
Partnerships with technology leaders such as NVIDIA are central to delivering reference designs that match AI application requirements.
Resolutions and action items
Vertiv will continue to invest in and expand its capacity in India to support AI data‑center growth.
Vertiv will deepen its partnership with NVIDIA to develop and deliver AI‑optimized reference designs.
Vertiv will promote and deploy its prefabricated, modular solutions (Smart Run, OneCore) to accelerate data‑center deployments.
Unresolved issues
Specific implementation plans for transitioning existing data‑centers to 800 V DC power architectures.
Details on how heat‑reuse systems will be integrated at scale and their economic viability.
Standardization and interoperability frameworks for fully orchestrated “body‑of‑AI” infrastructure across vendors.
Quantitative forecasts for required power and cooling capacity as AI workloads continue to grow beyond current projections.
Suggested compromises
None identified
Thought Provoking Comments
There is an important, very important physical part of AI that sometimes is overlooked. It’s the power, cooling, and data‑center infrastructure that actually makes AI possible.
Shifts the conversation from AI as software to the often‑ignored hardware foundation, reminding the audience that AI’s capabilities are constrained by physical resources.
Redirects the focus of the session to infrastructure challenges, setting up subsequent discussion of power density, cooling, and modular solutions. It primes listeners to consider the broader ecosystem rather than just AI algorithms.
Speaker: Giordano Albertazzi
The DNA of a data center is changing: what used to be a rack with 10‑20 kW is rapidly becoming 30, 50, 150 kW per rack, and potentially up to a megawatt per rack.
Quantifies the exponential growth in power density, highlighting a concrete engineering problem that underpins AI scaling.
Leads to a deeper dive into power‑train evolution and the need for new architectures (e.g., 800‑V DC), influencing later remarks about future‑resilient infrastructure and prefabricated deployment.
Speaker: Giordano Albertazzi
Human intelligence happens in the brain, but the brain doesn’t survive without a body. Vertiv provides the ‘body’—the power and thermal infrastructure—that lets the AI ‘brain’ (the IT stack) function and produce intelligence.
Uses a vivid biological analogy to make the abstract relationship between compute and infrastructure intuitive, reinforcing the interdependence of hardware and AI.
Strengthens the narrative that infrastructure must be orchestrated as a single system, paving the way for the later emphasis on integrated, interoperable solutions.
Speaker: Giordano Albertazzi
The unit of compute is no longer the server; it’s the AI pod, and ultimately the entire data center operating as one single computer, potentially reaching gigawatt scales.
Reframes the scale at which compute is thought about, moving from individual servers to massive, unified facilities, which challenges traditional data‑center design paradigms.
Encourages the audience to think about modular, scalable designs and justifies the push for prefabricated, rapid‑deployment solutions discussed later.
Speaker: Giordano Albertazzi
Prefabricated solutions like Vertiv Smart Run can reduce time‑to‑deploy by almost 85%, an order of magnitude faster than traditional builds.
Provides a concrete metric that illustrates how new construction approaches can meet the urgent demand for AI‑ready infrastructure.
Introduces a tangible benefit that supports the argument for modular construction, influencing the subsequent speaker to elaborate on Vertiv OneCore as a hybrid solution.
Speaker: Giordano Albertazzi
India is an extremely promising market and hub for AI, not only because of power availability but also due to its demographics; Vertiv is committed to expanding capacity there.
Highlights geographic and strategic considerations, linking infrastructure capability to regional economic factors and positioning India as a focal point for future growth.
Broadens the discussion from technical challenges to market strategy, setting the stage for audience interest in regional deployment models and partnerships.
Speaker: Giordano Albertazzi
Vertiv OneCore combines the advantages of sequential traditional builds and prefabricated modular construction by inserting power and thermal building blocks into a steel shell, tested in a factory before on‑site assembly.
Introduces a hybrid construction model that directly addresses the earlier pain points of speed, quality, and scalability, offering a concrete solution to the problems raised.
Acts as a turning point that moves the conversation from problem description to a specific product offering, prompting listeners to consider implementation pathways and potentially shifting the tone toward actionable outcomes.
Speaker: Speaker 3
Overall Assessment

The discussion was driven forward by a series of insightful remarks that reframed AI from a purely software narrative to a hardware‑centric challenge. Giordano Albertazzi’s analogies, quantitative density figures, and emphasis on modular, rapid‑deployment infrastructure highlighted the urgency and complexity of scaling AI workloads. These points set the stage for Speaker 3’s introduction of Vertiv OneCore, which served as a pivotal moment by presenting a concrete, hybrid solution that directly addressed the earlier identified challenges. Collectively, the comments shifted the dialogue from abstract AI potential to tangible infrastructure strategies, deepening the technical depth and aligning the audience around actionable pathways for building the next generation of AI‑ready data centers, especially in high‑growth regions like India.

Follow-up Questions
What are the technical challenges and solutions for transitioning to 800‑volt DC power infrastructure in high‑density AI data centers?
Understanding DC conversion is critical for supporting future power densities and improving efficiency.
Speaker: Giordano Albertazzi
How can the heat generated by extremely dense AI workloads be effectively captured and reused?
Heat reuse can improve overall energy efficiency and sustainability of AI data centers.
Speaker: Giordano Albertazzi
What best practices enable prefabricated, modular data center construction that reduces deployment time by up to 85%?
Rapid deployment is essential to meet the fast‑growing demand for AI infrastructure.
Speaker: Giordano Albertazzi
How does the shift from server‑based compute to pod‑based and whole‑data‑center compute affect infrastructure design and management?
Redefining the unit of compute changes power, cooling, and networking requirements.
Speaker: Giordano Albertazzi
What specific reference designs are being co‑developed with NVIDIA for AI‑optimized data center deployments?
Reference designs can accelerate adoption and ensure optimal performance for AI workloads.
Speaker: Giordano Albertazzi
What is the projected demand for AI data‑center capacity in India, and how can Vertiv scale its offerings to meet that demand?
Accurate demand forecasting is needed to plan investments and capacity expansion in a key market.
Speaker: Giordano Albertazzi
What regulatory, grid‑capacity, and reliability challenges exist in India for supporting megawatt‑per‑rack power densities?
Addressing grid constraints is essential for deploying ultra‑dense AI racks safely.
Speaker: Giordano Albertazzi
How does Vertiv OneCore integrate with existing building shells versus new steel‑building shells, and what are the trade‑offs?
Clarifying integration options helps customers choose the most suitable deployment path.
Speaker: Speaker 3
What metrics should be used to assess the resilience and future‑proofing of AI‑focused data‑center infrastructure?
Standardized metrics enable objective evaluation of long‑term reliability.
Speaker: Giordano Albertazzi
What are the environmental and sustainability impacts of high‑density AI data centers, particularly regarding cooling and power consumption?
Sustainability considerations are increasingly important for large‑scale deployments.
Speaker: Giordano Albertazzi
How can industry standards ensure interoperability across power, cooling, UPS, and thermal‑chain components in AI data centers?
Interoperability reduces integration risk and simplifies lifecycle management.
Speaker: Giordano Albertazzi
What are the cost implications of adopting 800‑volt DC distribution compared with traditional AC systems in AI data centers?
Cost analysis is needed to justify capital expenditures for new power architectures.
Speaker: Giordano Albertazzi
How can Vertiv’s prefabricated solutions be customized for diverse geographic markets such as India?
Customization ensures solutions meet local regulatory, climate, and operational requirements.
Speaker: Giordano Albertazzi
What are Vertiv’s timelines and investment plans for expanding capacity and presence in the Indian market?
Clear timelines help partners and customers align their own rollout strategies.
Speaker: Giordano Albertazzi
What role will AI itself play in optimizing data‑center infrastructure management and operations?
AI‑driven management could improve efficiency, predictive maintenance, and resource allocation.
Speaker: Giordano Albertazzi
What are the primary failure modes and reliability concerns for ultra‑dense AI pods, and how can they be mitigated?
Identifying failure modes is essential for designing robust, high‑availability systems.
Speaker: Giordano Albertazzi
How does the Vertiv Smart Run solution achieve an 85% reduction in deployment time, and what are its key components?
Understanding the solution’s mechanisms can guide replication in other projects.
Speaker: Giordano Albertazzi
What challenges arise when integrating power and thermal infrastructure into a single modular block, and how are they addressed?
Integrated blocks promise speed but may introduce design and maintenance complexities.
Speaker: Speaker 3
What are the comparative advantages of liquid cooling versus traditional air cooling for AI workloads at extreme densities?
Cooling choice directly impacts performance, energy use, and rack density limits.
Speaker: Giordano Albertazzi
How can data‑center operators measure and improve the efficiency of heat extraction, rejection, and reuse processes?
Efficient thermal management reduces operational costs and environmental impact.
Speaker: Giordano Albertazzi
What future trends in AI workload density are expected, and how will they influence next‑generation data‑center design?
Anticipating workload growth guides long‑term infrastructure planning.
Speaker: Giordano Albertazzi

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.