Building Trusted AI at Scale: Cities, Startups & Digital Sovereignty – Keynote by Giordano Albertazzi
20 Feb 2026 12:00h - 13:00h
Summary
The session featured Giordano Albertazzi, CEO of Vertiv, who emphasized that while AI discussions often highlight software capabilities, the physical data-center infrastructure that powers AI is equally critical [9-15]. He outlined Vertiv’s role in delivering power, cooling and overall data-center infrastructure, noting the company’s evolution from Emerson to an independent, publicly-traded firm with deep industry expertise [20-22].
Albertazzi explained that the rapid adoption of GPUs has driven extreme rack densification, with power per rack rising from 10-20 kW to potentially 150 kW or even a megawatt, fundamentally altering data-center design [30-33]. This shift requires a coordinated “body” of power-train and thermal systems rather than isolated components, and Vertiv aims to provide fully orchestrated solutions that integrate power, cooling and heat-reuse [45-48][70-76]. He highlighted the move from individual servers to AI pods and ultimately to treating an entire data center as a single compute unit capable of gigawatt-scale workloads [78-84].
To meet the speed and scale demands, Vertiv offers prefabricated, factory-tested modules, such as the “OneCore” system, that can cut deployment time by up to 85% compared with traditional builds [91-97][118-124]. Speaker 3 contrasted conventional sequential construction with prefabricated modular approaches, noting that Vertiv’s solutions combine the benefits of both methods [118-122]. The company’s close collaboration with NVIDIA enables reference designs that match AI workloads and accelerates market adoption [54-58].
Albertazzi stressed India’s strategic importance because of abundant power, favorable demographics and existing Vertiv presence, and announced plans to expand capacity and invest further in the region [52-53][98-108][110-112]. He noted that faster, larger deployments create challenges in time and scale, which Vertiv addresses through integrated, resilient designs [49-51][89-90]. He also described Vertiv’s “future-resilient” architecture that can evolve as AI power densities increase [88-90]. Concluding with optimism, he asserted that the evolving infrastructure will sustain AI growth worldwide and that Vertiv is positioned to lead this transformation [113-115].
Keypoints
Major discussion points
– The physical infrastructure (power and cooling) is the foundation that makes AI possible.
Albertazzi stresses that the “very important physical part of AI… makes AI actually possible” and that Vertiv’s role is to supply the best power-train and thermal chain for AI workloads [13-18][30-34].
– AI workloads are driving extreme densification, requiring new power-density and voltage architectures.
He notes that racks are moving from 10-20 kW to 30-150 kW and even a megawatt per rack, and that the industry is migrating toward 800-volt DC power to handle this density [30-33][66-67].
– Vertiv is promoting modular, pre-engineered solutions (OneCore/OneVert) to speed up deployment and cut labor.
The “fully pre-engineered, defined data center” (OneVert) and “repeatable converged infrastructure” that can scale from 12.5 MW to gigawatts are highlighted, along with prefabrication that can reduce build time by up to 85% [60-62][86-88][96-97].
– India is positioned as a strategic hub for AI data-center growth.
Albertazzi points to India’s abundant power, favorable demographics, and existing Vertiv presence, stating the company will “invest… expand capacity” and sees the country as a global AI hub [98-108].
– Partnership with NVIDIA and a shift from server-centric to pod-/data-center-as-a-computer architectures.
He praises the collaboration with NVIDIA on reference designs and explains that the unit of compute is evolving from individual servers to AI pods and ultimately to an entire data center operating as a single computer [54-58][78-84].
Overall purpose / goal
The presentation aims to convince the audience that robust, high-density power and cooling infrastructure is the critical enabler for the AI revolution, to showcase Vertiv’s innovative, modular solutions (especially OneCore/OneVert) that can meet these demands quickly and at scale, and to underline the company’s strategic focus on India as a growth market while highlighting its partnership with NVIDIA.
Overall tone
The tone is consistently upbeat, confident, and promotional. Albertazzi begins with enthusiasm about AI’s possibilities, moves into technical detail with authority, then shifts to an optimistic, forward-looking stance when discussing India and future investments. Throughout, the language remains positive (“thrilled,” “optimistic,” “excited”) and never turns critical or defensive. No major tonal shift occurs; the optimism intensifies toward the end as he emphasizes market opportunities and partnerships.
Speakers
– Speaker 1
– Role/Title: Moderator / event host who introduces speakers [S4][S6]
– Area of expertise:
– Speaker 3
– Role/Title:
– Area of expertise:
– Giordano Albertazzi
– Role/Title: Chief Executive Officer, Vertiv (Representative from Vertiv) [S7]
– Area of expertise: Digital infrastructure solutions for data centers, AI-related power and cooling systems
Additional speakers:
(none)
Introduction (Speaker 1) – Speaker 1 opened the session by introducing Mr Giordano Albertazzi, chief executive officer of Vertiv, a global provider of digital-infrastructure solutions for data centres and communication networks, and noted Vertiv’s ambition to accelerate innovation and support critical applications worldwide [1-3].
Physical layer of AI (Albertazzi) – Albertazzi observes that most AI conversations celebrate what AI can do, especially in India, while the “very important physical part of AI… makes AI actually possible” is often ignored – a point he makes early in his remarks about the need for power, cooling and overall data-centre infrastructure [9-15].
Vertiv background (Albertazzi) – He outlines Vertiv’s heritage: originally part of Emerson Electric, the company has been an independent, publicly-traded entity for almost a decade and brings decades of expertise in delivering the physical layer that enables the rapidly evolving AI-IT stack [18-22].
Extreme densification (Albertazzi) – Rack power density has jumped from the historic 10-20 kW range to 30-150 kW today, with future designs envisaging up to 1 MW per rack, fundamentally altering data-centre design and power-heat management requirements [30-34].
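The densification figures above can be made concrete with a quick back-of-the-envelope calculation. The sketch below is an illustrative aid, not from the talk itself: the 12.5 MW hall size is the building-block capacity mentioned later in the session, and the per-rack densities are the ones Albertazzi quotes.

```python
# Back-of-the-envelope sketch (structure is mine; only the quoted densities
# and the 12.5 MW building block come from the keynote): how rack power
# density changes the rack count of a fixed-capacity AI data hall.
import math

HALL_CAPACITY_KW = 12_500  # 12.5 MW, the smallest building block mentioned

densities_kw = {
    "legacy rack": 15,        # midpoint of the historic 10-20 kW range
    "dense AI rack": 150,     # upper end of today's quoted 30-150 kW
    "future AI rack": 1_000,  # the 1 MW-per-rack projection
}

for label, kw in densities_kw.items():
    # Round up: a partially filled rack still occupies a full position.
    racks = math.ceil(HALL_CAPACITY_KW / kw)
    print(f"{label:>14}: {kw:>5} kW/rack -> {racks} racks for 12.5 MW")
```

The same 12.5 MW hall shrinks from roughly 834 legacy racks to a handful of megawatt-class racks, which is why the speaker describes the change as altering the DNA of data-centre design.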
Brain-body analogy & orchestrated infrastructure (Albertazzi) – Using a biological analogy, Albertazzi explains that just as the brain needs a body, the AI “brain” (the IT stack) requires a well-orchestrated “body” of power-train and thermal systems; the power chain must move from grid to chip, and the thermal chain must handle heat extraction, rejection and reuse – all as an interoperable whole [38-48][70-76]. To accommodate the higher densities, Vertiv is transitioning to 800-V DC distribution, which better supports the increased power loads [66-76].
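The rationale for migrating to higher-voltage DC distribution can be illustrated with basic electrical arithmetic. The sketch below is my own illustration, not Vertiv engineering data: the busbar resistance is an assumed placeholder and only ohmic loss is modelled. For a fixed power draw, current falls as voltage rises (I = P / V), and resistive distribution loss falls with the square of the current.

```python
# Illustrative sketch (assumed numbers, not Vertiv's): why distribution
# voltage matters as rack power climbs. For fixed power P, current is
# I = P / V, and resistive loss in the distribution path is I^2 * R.

RACK_POWER_W = 150_000   # a 150 kW rack, as quoted in the talk
BUSBAR_R_OHM = 0.001     # assumed 1 milliohm path resistance (placeholder)

for volts in (48, 400, 800):
    amps = RACK_POWER_W / volts
    loss_w = amps ** 2 * BUSBAR_R_OHM
    print(f"{volts:>4} V DC: {amps:8.1f} A, ~{loss_w:8.1f} W resistive loss")
```

Doubling the voltage from 400 V to 800 V halves the current and cuts the ohmic loss by a factor of four, which also means thinner conductors for the same power delivered to the chip.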
Shift in compute unit (Albertazzi) – He notes that the basic compute unit is shifting from the traditional server to an AI pod, and ultimately to the entire data centre operating as a single computer capable of gigawatt-scale workloads [78-84].
Vertiv’s modular, pre-engineered solution (Albertazzi) – Vertiv showcases a modular data-centre solution – rendered in the transcript as “one vertigo, one core” (most likely “Vertiv OneCore”) – that provides a repeatable, converged infrastructure and can be scaled from 12.5 MW up to gigawatt capacities, delivering a “future-resilient” platform [60-62][86-88].
Prefabrication advantage (Albertazzi) – The Vertiv SmartRun prefabrication methodology can cut deployment timelines by roughly 85%, a speed-up Albertazzi characterised as almost an order of magnitude over traditional, labour-intensive builds [91-97].
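The two figures quoted here can be reconciled with one line of arithmetic: an 85% cut in deployment time corresponds to roughly a 6.7× speed-up, close to, but short of, a strict order of magnitude (10×). A minimal check:

```python
# Sanity check on the quoted figures: relate a fractional time reduction
# to the equivalent speed-up factor (and vice versa).

reduction = 0.85                 # 85% shorter deployment time, as quoted
speedup = 1 / (1 - reduction)    # remaining time is 15% of the original
print(f"{reduction:.0%} time reduction -> {speedup:.1f}x faster")

# A true order-of-magnitude (10x) speed-up would require:
print(f"10x faster -> {1 - 1/10:.0%} time reduction")
```

So “almost 85%” and “almost an order of magnitude” are consistent as approximations: a full 10× speed-up would need a 90% reduction.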
India as a strategic AI hub (Albertazzi) – Albertazzi highlights India’s abundant power supply, favourable demographics, and Vertiv’s long-standing presence as reasons to view the country as a global AI centre, and he announces plans to expand capacity and increase investment in the Indian market [52-53][98-108][110-112].
Collaboration with NVIDIA (Albertazzi) – Joint reference designs with NVIDIA are being co-developed to align infrastructure with AI workloads, positioning the partnership as a driver of market leadership and accelerated adoption of AI-optimised data centres [54-58].
Future-resilient design (Albertazzi) – He stresses the need for “future-resilient” designs that can withstand rapid deployment pressures while maintaining reliability, noting that Vertiv’s integrated approach addresses both time-to-market and scale constraints [49-51][89-90].
Closing (Speaker 1 & Albertazzi) – In closing, Albertazzi expresses strong optimism about the role of robust infrastructure in sustaining AI’s growth, thanks the audience for their attention, and is acknowledged by the moderator for his impactful address [113-115][117].
Well, ladies and gentlemen, now it’s my pleasure to invite our next speaker, Mr. Giordano Albertazzi, who is the chief executive officer of Vertiv, a global company that provides digital infrastructure solutions for data centers and communication networks. Under his leadership, Vertiv is advancing its role as a global industry leader by accelerating innovation, strengthening technology leadership, and enabling the digital infrastructure that powers critical applications worldwide. Ladies and gentlemen, please welcome Mr. Giordano Albertazzi.
Thank you very much. The clicker? Oh, yeah, here. Better with the clicker. Good afternoon, everyone. And it’s absolutely a pleasure and an honor being on this stage with so many distinguished presenters. In the last two days, I’ve had the opportunity to talk about AI. An astonishing thing to me is that the majority of the AI conversations, as it should be, are about what AI can do. Very interesting presentation, just finished, tells about all the beautiful things that AI can do and particularly what AI can do here in India. But when we talk AI, we also talk about data centers. But let me go then to the physical part of AI, not just what AI can do for us.
There is an important, very important physical part of AI that sometimes is overlooked. And it shouldn’t, because it’s that physical part that makes AI actually possible. So I’ll talk about the physical part today. I will talk about the power. The cooling, the data center infrastructure. Vertiv and myself, with Vertiv, have been in the industry for decades. Well, Vertiv longer than me. It used to be part of Emerson, Emerson Electric, and we are almost 10 years as an independent company now publicly traded in New York. But what we do is really make sure that that physical part is provided with the best technology that supports the continuous evolution of the AI IT stack as those rapidly, almost exponentially, and I’m talking almost exponentially from a mathematical standpoint, evolve.
And it’s no easy task, a task that we would do very well because we know the space a lot. We have a lot of innovation. But there are several dimensions to this. One is the extreme densification. Now, we all know what GPUs are. Probably two years ago, majority of people didn’t have any clue about what a GPU is. But now, GPU, NVIDIA is absolutely central to everything, all the conversation about AI. Well, that phenomenal evolution from a technology standpoint is changing the DNA of a data center. What used to be a rack with IT inside 10, 15, 20 kilowatt per rack is rapidly becoming more dense and with more power and heat to dissipate in it. This is going to 30, 50, 150 kilowatts per rack all the way in the possible future.
One megawatt per rack. That’s a lot of power in a single rack. The design of a data center is changing dramatically. As this design changes, of course, also the technology that supports it needs to change. But let me go back to AI, artificial intelligence. Let me go back to, and let me draw a parallel. Human intelligence. Human intelligence happens in the brain. But the brain doesn’t survive without a body. What we are, what we do at Vertiv, make that body, provide that technology for that body so that the brain can function, and that brain is the IT stack. But not only that brain can function, but also can produce intelligence. And that’s what an AI does.
That’s what an AI factory, an AI data center is doing. But just like the body, historically, data centers and data center engineering was viewed as disparate systems coming together. Now, we cannot think about a human body, or any body, as individual parts, a chiller, a liquid cooling unit, an uninterruptible power supply, or whatever else in the powertrain or thermal chain you can think of. Everything needs to be orchestrated. Everything needs to be interoperable. Everything must be thought of as one thing. And that’s what we do in a world that is extraordinarily challenging, but it’s a challenge that we, of course, respond to very successfully, challenging in terms of time of deployment and in terms of scale of deployment.
Okay. Data centers need to be developed faster and faster and are becoming bigger and bigger. You heard that. It is about data centers. India is certainly privileged from an AI standpoint also because there is a lot of power available that can be harnessed for more and more powerful and larger data centers. Now, as that happens, again, if you think, go back to my analogy of the body, you think about a system, you think about everything that is the body of artificial intelligence, then it is about changing the way we build that body from one piece at a time with a lot of activity happening on site, laborious, hard from a quality standpoint, to most integrated at factory level and deployment.
As NVIDIA continues to lead the world in terms of technology, in terms of IT stack, but also in terms of thought process for the infrastructure. And it’s something that we do a lot together. Well, then it is not just about the infrastructure and the speed and the size and the scale. It’s also about optimizing the infrastructure with reference designs that exactly target that type of application. So we, of course, are thrilled and always honored to partner with NVIDIA in this adventure and venture and lead the market in this respect. So here you have an example of what we call Vertiv OneCore. There’s an example of a fully pre-engineered, defined data center. But when we talk about the body, the body of AI, the data center, then let’s talk very simply.
We talk about three fundamental, fundamental elements of that body. One is the powertrain. So everything that goes from the grid, if you will, for your utility, takes that power all the way to the chip. That power infrastructure is changing, is evolving as the power density changes. And the current architectures are migrating towards, over time, what is an 800-volt DC power infrastructure. I’m going technical on you. Some of you I know are very technical, so I’m not afraid about that. But I will not go deep. So everything you see on the left side of this is exactly a representation of that powertrain. So bring the energy, take that energy to the chip. Then the chip and all the electronic components in a server generate heat.
And that heat can be very dense. And require very, very advanced cooling mechanisms. And that’s the beginning of what we like to call a thermal chain that starts, and it’s what you see on the right side of this chart from the chip all the way to the heat extraction, the heat then rejection, or even more importantly, and more extensively so, is the heat reuse. So this is the system, the fundamental systems of this body. But again, it’s not just about the components of the system, it’s how the entire system works. And more and more, we see that when we think about the AI IT infrastructure, what used to be thought of as a server at a time is becoming really an AI pod, an AI unit at a time. The unit of compute is no more the server. It is the pod. Unit of compute is not even the pod. It’s the entire data center operating as one single computer. A unit of compute that can go all the way to gigawatts. So it is about making sure, and I believe we do it very well, that I say uniquely well, but of course I root for ourselves. It is about making that infrastructure available at scale and in a very easy modular to deploy fashion. And that’s what we do. So a repeatable converged infrastructure major. So we have a lot of building blocks that can go from 12.5 megawatts all the way to gigawatts.
So clearly it’s not just about building that infrastructure, but that infrastructure over time needs to be, like we like to say, future resilient. Some people, like myself, have been in the industry of data center for quite some time, and it’s fascinating the speed at which things happen. And this speed is also enabled by new solutions that make prefabricated and very fast to deploy parts of data centers that used to be very, very laborious. Take a data center. It’s empty when the building is new. You have to fill it with power, with cooling, with cables. You have to put the racks. Very laborious and time consuming. Time to token is of the essence. Prefabrication, for example, with what we do with Vertiv SmartRun, reduces the time to deploy almost 85%, almost an order of magnitude.
So the industry is changing, not only in scale and in density, but also in the way things are done and deployed. Let me take a different angle now and focus on India. India clearly central to the AI evolution revolution and central certainly in terms of the infrastructure that is being built and the infrastructure that will be built in the future and in the coming years. This infrastructure and the speed at which this infrastructure will be built, of course, will depend, as I was saying, by the ability of the likes of Vertiv, but certainly Vertiv, given our prominent position also in India, to really enable this at scale and at speed in the ways that I explained.
So Vertiv in India has a long tradition. We’ve been here for decades. We have what I believe is an awesome team and awesome partnerships. And now this forum, these sessions, these few days convinced me even more of the importance of India as a place. A place to invest. And invest we will. We are expanding our capacity and will continue to expand capacity. We see India certainly as an extremely promising market. as a hub for AI, not only for India, but globally. So it has got the power availability. Certainly India has got the right demographics. So I couldn’t be more excited about the business in India. I couldn’t be more excited about what we’re doing in India and what our partners are doing in India.
So with that, I’m extremely optimistic. I’m a big optimist about what AI will bring, as we heard. And with that, thank you very much. Thank you.
Thank you so much, Mr. Albertazzi, for your impactful address and also for…
Data centers have, up until now, been usually constructed in one of two ways. Traditional data center build follows a sequential process, materials and equipment arriving individually on site, with the build progressing from the ground up. Alternatively, prefabricated modular construction can offer many advantages, such as quicker deployments and risk reduction. Vertiv offers many solutions in this space. However, in the age of increasing IT loads powered by artificial intelligence, there’s another option that combines the advantages of both. Vertiv OneCore. Vertiv power and thermal infrastructure building blocks are inserted into a brand new Vertiv-supplied steel building shell, or an existing building. Infrastructure building blocks are made in controlled factory environments and tested before construction. The system is also equipped with a new, more efficient…
“Speaker 1 introduced Giordano Albertazzi as chief executive officer of Vertiv, a global provider of digital‑infrastructure solutions for data centres and communication networks.”
The knowledge base identifies Giordano Albertazzi as CEO of Vertiv discussing critical physical infrastructure for AI, confirming his role and the company’s focus on data-centre solutions [S8] and [S7].
“Albertazzi observes that most AI conversations celebrate what AI can do, especially in India, while the ‘very important physical part of AI… makes AI actually possible’ is often ignored.”
Albertazzi’s emphasis on the overlooked physical infrastructure that enables AI is directly stated in the knowledge base [S8].
“Rack power density has jumped from the historic 10‑20 kW range to 30‑150 kW today, with future designs envisaging up to 1 MW per rack.”
The source notes the evolution from a few kilowatts per rack to 10-30 kW and cites current deployments in India at around 80 kW per rack, confirming the upward trend but not the 1 MW projection [S50].
“India’s abundant power supply, favourable demographics, and Vertiv’s long‑standing presence make the country a global AI centre; Vertiv plans to expand capacity and increase investment in the Indian market.”
Albertazzi’s remarks about India as a strategic AI hub and Vertiv’s activities there are reflected in the knowledge base, which highlights his focus on India’s physical-layer potential and ongoing high-density rack deployments [S8] and [S50]; broader commentary on India’s AI opportunities appears in [S47].
The speakers converge on two main ideas: (1) modular, prefabricated construction (Smart Run, OneCore) is a key accelerator for AI data‑center deployment, and (2) the physical infrastructure must be conceived as a unified, interoperable system rather than isolated components. These points reflect a clear consensus on the technical and operational pathways needed to meet the rapid growth of AI workloads.
Moderate to strong consensus on infrastructure integration and rapid‑deployment solutions, indicating that participants share a common vision for how the data‑center ecosystem should evolve to support AI. This alignment suggests that industry stakeholders are likely to collaborate on standardising modular designs and integrated power‑thermal architectures.
The discussion shows strong alignment among the speakers on the need for integrated, high‑density AI data‑center infrastructure and the value of modular, prefabricated construction. The only variation is in the specific Vertiv product highlighted (Smart Run vs OneCore). No substantive contradictions were identified.
Low – the participants largely concur on the challenges and objectives, with only minor differences in preferred implementation pathways, suggesting a cohesive industry stance on accelerating AI‑driven data‑center deployment.
The discussion was driven forward by a series of insightful remarks that reframed AI from a purely software narrative to a hardware‑centric challenge. Giordano Albertazzi’s analogies, quantitative density figures, and emphasis on modular, rapid‑deployment infrastructure highlighted the urgency and complexity of scaling AI workloads. These points set the stage for Speaker 3’s introduction of Vertiv OneCore, which served as a pivotal moment by presenting a concrete, hybrid solution that directly addressed the earlier identified challenges. Collectively, the comments shifted the dialogue from abstract AI potential to tangible infrastructure strategies, deepening the technical depth and aligning the audience around actionable pathways for building the next generation of AI‑ready data centers, especially in high‑growth regions like India.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.