Building Trusted AI at Scale: Cities, Startups & Digital Sovereignty – Keynote: Giordano Albertazzi

20 Feb 2026 12:00h - 13:00h


Session at a glance

Summary

The discussion featured Giordano Albertazzi, CEO of Vertiv, addressing the critical physical infrastructure requirements that enable artificial intelligence operations in data centers. Albertazzi emphasized that while most AI conversations focus on capabilities and applications, the physical infrastructure supporting AI is often overlooked despite being essential for AI functionality. He explained that Vertiv, formerly part of Emerson Electric and now an independent publicly traded company, specializes in providing the power, cooling, and data center infrastructure that makes AI possible.


Albertazzi highlighted the dramatic transformation occurring in data center design due to AI’s extreme densification requirements. Traditional data center racks that previously handled 10-20 kilowatts are rapidly evolving to support 30-150 kilowatts per rack, with future possibilities reaching one megawatt per rack. He drew an analogy between human intelligence requiring a body to support the brain, explaining that Vertiv creates the “body” infrastructure that enables the AI “brain” to function and produce intelligence.


The speaker stressed that modern data centers can no longer be viewed as disparate systems but must be orchestrated as integrated, interoperable units functioning as one system. He discussed the evolution from individual server-based computing to AI pods and entire data centers operating as single computers capable of gigawatt-scale operations. Albertazzi emphasized the importance of prefabricated, modular solutions that can reduce deployment time by 85%, addressing the industry’s need for faster, larger-scale implementations.


Regarding India specifically, Albertazzi expressed strong optimism about the country’s central role in AI infrastructure development, citing available power resources, favorable demographics, and Vertiv’s long-standing presence and partnerships in the region. He concluded by reaffirming Vertiv’s commitment to expanding capacity in India and positioning the country as a global AI hub.


Key points

Major Discussion Points:


The Physical Infrastructure Behind AI: Albertazzi emphasizes that while most AI discussions focus on capabilities, the physical infrastructure (power, cooling, data centers) that makes AI possible is often overlooked but critically important.


Extreme Densification and Evolving Data Center Design: The shift from traditional IT to GPU-based AI is dramatically changing data center requirements, with power density per rack increasing from 10-20 kilowatts to potentially one megawatt per rack, necessitating completely new infrastructure approaches.


Systems Integration and Orchestration: Moving away from viewing data centers as disparate components to treating them as integrated, orchestrated systems – using the analogy of AI as a “brain” that needs a coordinated “body” (infrastructure) to function effectively.


Prefabrication and Speed of Deployment: The industry is shifting toward factory-integrated, modular solutions that can reduce deployment time by 85%, addressing the urgent need for faster data center construction to meet AI demand.


India as a Strategic AI Infrastructure Hub: Positioning India as central to global AI infrastructure development due to available power resources, favorable demographics, and Vertiv’s commitment to expanding capacity and investment in the region.


Overall Purpose:


The discussion aims to highlight the critical but often underappreciated physical infrastructure requirements for AI development, while positioning Vertiv as a key enabler of AI growth through innovative data center solutions, particularly emphasizing opportunities in the Indian market.


Overall Tone:


The tone is consistently optimistic and forward-looking throughout. Albertazzi maintains an enthusiastic, confident demeanor when discussing both the technical challenges and business opportunities. The presentation has an educational quality as he explains complex infrastructure concepts, but remains promotional in highlighting Vertiv’s capabilities and future prospects, especially regarding India’s potential in the AI infrastructure space.


Speakers

Announcer: Role/Title: Event announcer/moderator; Area of expertise: Not mentioned


Giordano Albertazzi: Role/Title: Chief Executive Officer of Vertiv; Area of expertise: Digital infrastructure solutions for data centers, communication networks, AI data center infrastructure, power and cooling systems


Video presentation: Role/Title: Not mentioned; Area of expertise: Data center construction methods and prefabricated modular solutions


Additional speakers:


None identified beyond the speakers' names list.


Full session report

Summary: The Physical Infrastructure Imperative for Artificial Intelligence


Giordano Albertazzi, Chief Executive Officer of Vertiv, delivered a presentation addressing the critical physical infrastructure requirements that underpin artificial intelligence operations. Albertazzi focused attention on the fundamental hardware infrastructure that makes AI functionality possible, noting that while most AI conversations appropriately focus on applications and capabilities, there exists a crucial blind spot regarding the physical infrastructure that enables these technologies.


The Foundation Argument: Physical Infrastructure as AI’s Enabler


Albertazzi emphasized that the physical component—encompassing power systems, cooling mechanisms, and data centre infrastructure—should not be overlooked because it represents the foundational layer that makes AI actually possible. Drawing from Vertiv’s extensive industry experience, he provided context for his company’s expertise in this domain. Vertiv, formerly part of Emerson Electric and now operating as an independent publicly traded company on the New York Stock Exchange for nearly a decade, specializes in providing technological infrastructure that supports the continuous evolution of AI IT stacks.


The Extreme Densification Challenge


A central theme of Albertazzi’s presentation focused on the dramatic transformation occurring in data centre design due to AI’s extreme densification requirements. He explained that traditional data centre racks previously handled 10-20 kilowatts per rack but are rapidly evolving to support 30-50 kilowatts, with projections reaching 150 kilowatts per rack and potential future capabilities of one megawatt per rack. This represents an extraordinary increase in power density that fundamentally alters data centre infrastructure.
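As a rough illustration of what this densification means for facility layout, the back-of-the-envelope sketch below counts the racks needed to absorb a hypothetical IT load at the densities mentioned in the talk. The 100 MW facility size is an illustrative assumption, not a Vertiv figure:

```python
# Back-of-the-envelope: rack counts at different power densities.
# The densities (15, 150, 1000 kW/rack) follow the talk's legacy,
# AI-era, and projected figures; the 100 MW facility load is assumed.

def racks_needed(facility_kw: int, rack_kw: int) -> int:
    """Racks required to absorb a facility's IT load (ceiling division)."""
    return -(-facility_kw // rack_kw)

FACILITY_KW = 100_000  # assumed 100 MW of IT load

for density in (15, 150, 1000):
    print(f"{density:>5} kW/rack -> {racks_needed(FACILITY_KW, density):>5} racks")
```

At 1 MW per rack the same load fits in roughly a hundredth of the racks, which is why rack-level power delivery and cooling, rather than floor space, become the binding constraints.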


This densification challenge stems primarily from the proliferation of Graphics Processing Units (GPUs), which Albertazzi noted have transformed from components that most people probably didn’t know about two years ago to becoming central to all AI conversations, largely due to NVIDIA’s prominence in the field.


The Biological Metaphor: AI as Brain and Body


Albertazzi employed a biological metaphor throughout his presentation, drawing a parallel between artificial intelligence and human intelligence. He explained that while human intelligence occurs in the brain, the brain cannot survive without a body to support it. In this analogy, Vertiv creates the “body”—the technological infrastructure—that enables the AI “brain” (the IT stack) to function and produce intelligence.


This framework helped establish the concept that modern AI data centres, which Albertazzi termed “AI factories,” must be understood as integrated organisms rather than collections of disparate components.


Systems Integration and Orchestration


Building upon this concept, Albertazzi argued that the industry must abandon the historical approach of viewing data centres as disparate systems coming together. Instead, everything must be orchestrated and interoperable, functioning as one integrated system. This integration imperative becomes particularly challenging given the extraordinary demands for speed and scale in data centre deployment, which Albertazzi described as “extraordinarily challenging” in terms of both “time of deployment and scale of deployment.”


Technological Evolution and Architecture Changes


Albertazzi provided insights into specific technological changes occurring within data centre infrastructure. He explained that power infrastructure is migrating towards 800-volt DC power systems, noting “I’m going technical on you… But I will not go deep.” This evolution encompasses what he termed the “powertrain”—everything that takes power from the utility grid and delivers it to individual chips.
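The motivation for higher-voltage DC distribution can be sketched with basic circuit arithmetic: for a fixed power draw, current falls in proportion to voltage, and resistive conduction loss falls with its square. In the sketch below, the 150 kW rack load echoes the talk, while the 54 V comparison point and the 1 milliohm path resistance are illustrative assumptions, not Vertiv figures:

```python
# For a fixed power draw P, current is I = P / V, and resistive
# conduction loss is I**2 * R, so loss falls with the square of the
# distribution voltage. All resistance and voltage values here are
# illustrative assumptions.

def conduction_loss_w(power_w: float, volts: float, r_ohms: float) -> float:
    current_a = power_w / volts     # amps drawn at this distribution voltage
    return current_a ** 2 * r_ohms  # I^2 R loss in the distribution path

RACK_W, R_OHMS = 150_000, 0.001  # 150 kW rack, assumed 1 milliohm path

for v in (54, 800):  # low-voltage DC busbar vs 800 V DC
    amps = RACK_W / v
    loss = conduction_loss_w(RACK_W, v, R_OHMS)
    print(f"{v:>4} V DC: {amps:7.0f} A, conduction loss {loss:8.1f} W")
```

Moving from 54 V to 800 V cuts the current by a factor of about 15 and the conduction loss by a factor of over 200 for the same path resistance, which is the core engineering argument for raising distribution voltage as racks densify.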


Complementing the power infrastructure is what Albertazzi described as the “thermal chain,” which manages the substantial heat generated by dense electronic components in servers, extending from individual chips to heat extraction and rejection systems.


The Evolution of Computing Units


Albertazzi explained that the traditional concept of individual servers as computing units is becoming obsolete in AI applications. Instead, the industry is moving towards AI pods as the basic unit of compute, with further evolution towards entire data centres operating as one single computer. This architectural evolution necessitates infrastructure solutions that can scale from individual components to massive integrated systems, with Vertiv providing scalable building blocks that can range from 12.5 megawatts to gigawatt-scale implementations.
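The building-block arithmetic is easy to make concrete. The 12.5 MW block size is from the talk; the campus sizes below are illustrative assumptions:

```python
# Scaling the talk's 12.5 MW building block up to campus sizes.
# The block size is quoted from the talk; target sizes are assumed.

def blocks_needed(target_mw: float, block_mw: float = 12.5) -> float:
    return target_mw / block_mw

for target in (100, 500, 1000):  # illustrative campus sizes in MW
    print(f"{target:>5} MW campus -> {blocks_needed(target):.0f} x 12.5 MW blocks")
```

A gigawatt campus is thus on the order of eighty such blocks, which is why repeatability of the block design matters as much as its capacity.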


Prefabrication and Deployment Innovation


Addressing the critical challenge of deployment speed, Albertazzi emphasized the industry’s shift towards prefabricated and modular construction approaches. He contrasted traditional data centre construction—which involves sequential processes with materials and equipment arriving individually on site—with prefabricated solutions that offer significant advantages in deployment speed and risk reduction.


Albertazzi cited specific metrics showing that Vertiv SmartRun prefabrication solutions can reduce deployment time by approximately 85%. He referenced the Vertiv OneCore concept as an example of a fully pre-engineered data centre, and noted that a video presentation demonstrated the differences between traditional and prefabricated modular construction approaches.
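The quoted reduction translates into a speed-up factor as follows. The 85% figure is from the talk; the 24-month baseline build time is an illustrative assumption:

```python
# Translating "~85% reduction in deployment time" into a speed-up factor.
# The 85% figure is quoted from the talk; the 24-month baseline is an
# illustrative assumption, not a Vertiv number.

def speedup(reduction: float) -> float:
    """A reduction r leaves (1 - r) of the time: a 1 / (1 - r) speed-up."""
    return 1.0 / (1.0 - reduction)

REDUCTION = 0.85
BASELINE_MONTHS = 24  # assumed traditional build time

print(f"speed-up factor: {speedup(REDUCTION):.1f}x")
print(f"{BASELINE_MONTHS} months -> {BASELINE_MONTHS * (1 - REDUCTION):.1f} months")
```

Strictly, an 85% reduction is a roughly 6.7x factor, a little short of a full order of magnitude, which is consistent with the transcript's qualifier "almost."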


Strategic Partnerships and Industry Collaboration


Albertazzi highlighted the importance of strategic partnerships, particularly emphasizing Vertiv’s collaboration with NVIDIA. He expressed being “thrilled and always honored to partner with NVIDIA,” noting the partnership’s importance as NVIDIA continues to lead in IT stack technology and infrastructure. These collaborations enable the development of what Albertazzi termed “future resilient” infrastructure solutions designed to accommodate the rapid pace of technological change in the AI sector.


India as a Strategic AI Infrastructure Hub


Albertazzi positioned India as central to the AI evolution, citing several key advantages that make the country particularly attractive for large-scale AI infrastructure investment. Primary among these advantages is India’s substantial power availability, which can be harnessed for increasingly powerful and larger data centres. He also highlighted India’s favorable demographics as supporting the country’s potential as a global AI hub.


Vertiv’s commitment to the Indian market is substantial and long-standing. Albertazzi noted that the company has maintained a presence in India for decades, building what he described as an “awesome team and awesome partnerships.” The company’s investment strategy includes expanding capacity within India, reflecting confidence in the market’s growth potential.


Conclusion


Albertazzi’s presentation highlighted the critical importance of physical infrastructure in enabling AI capabilities. Having been in the data center industry “for quite some time,” he found the speed of change “fascinating” and emphasized that while AI capabilities capture attention, the physical infrastructure enabling these capabilities requires equal consideration and investment. His message reinforced that as AI continues to evolve rapidly, the infrastructure supporting it must evolve with equal sophistication and speed.


Session transcript

Announcer

Well, ladies and gentlemen, now it’s my pleasure to invite our next speaker, Mr. Giordano Albertazzi, who is the chief executive officer of Vertiv, a global company that provides digital infrastructure solutions for data centers, communication networks. Under his leadership, Vertiv is advancing its role as a global industry leader by accelerating innovation, strengthening technology leadership, and enabling the digital infrastructure that powers critical applications worldwide. Ladies and gentlemen, please welcome Mr. Giordano Albertazzi.

Giordano Albertazzi

Thank you very much. The clicker? Oh, yeah, here. Better with the clicker. Good afternoon, everyone. And it’s absolutely a pleasure and an honor being on this stage where so many distinguished presenters. In the last two days, I’ve had the opportunity to talk about AI. An astonishing thing to me is that the majority of the AI conversations, as it should be, are about what AI can do. Very interesting presentation, just finished, tells about all the beautiful things that AI can do and particularly what AI can do here in India. But when we talk AI, we also talk about data centers. But let me go then to the physical part of AI, not just what AI can do for us.

There is an important, very important physical part of AI that sometimes is overlooked. And it shouldn’t, because it’s that physical part that makes AI actually possible. So I’ll talk about the physical part today. I will talk about the power. The cooling, the data center infrastructure. Vertiv and myself, with Vertiv, have been in the industry for decades. Well, Vertiv longer than me. It used to be part of Emerson, Emerson Electric, and we are almost 10 years as an independent company now publicly traded in New York. But what we do is really make sure that that physical part is provided with the best technology that supports the continuous evolution of the AI IT stack as those rapidly, almost exponentially, and I’m talking almost exponentially from a mathematical standpoint, evolve.

And it’s no easy task, a task that we would do very well because we know the space a lot. We have a lot of innovation. But there are several dimensions. to this. One is the extreme densification. Now, we all know what GPUs are. Probably two years ago, majority of people didn’t have any clue about what a GPU is. But now, GPU, NVIDIA is absolutely central to everything, all the conversation about AI. Well, that phenomenal evolution from a technology standpoint is changing the DNA of a data center. What used to be a rack with IT inside 10, 15, 20 kilowatt per rack is rapidly becoming more dense and with more power and heat to dissipate in it.

This is going to 30, 50, 150 kilowatts per rack all the way in the possible future. One megawatt per rack. That’s a lot of power in a single rack. The design of a data center is changing dramatically. As this design changes, of course, also the technology that supports it needs to change. But let me go back to AI, artificial intelligence. Let me go back to, and let me draw a parallel. Human intelligence. Human intelligence happen in the brain. But the brain doesn’t survive without a body. What we are, what we do at Vertiv, make that body, provide that technology for that body so that the brain can function, and that brain is the IT stack.

But not only that brain can function, but also can produce intelligence. And that’s what an AI does. That’s what an AI factory, an AI data center is doing. But just like the body, historically, data centers and data center engineering was viewed as disparate systems coming together. Now, we cannot think about a human body, or any body, as individual parts, a chiller, a liquid cooling unit, an uninterruptible power supply, or whatever else in the powertrain or thermal chain you can think of. Everything needs to be orchestrated. Everything needs to be interoperable. Everything must be thought of as one thing. And that’s what we do in a world that is extraordinarily challenging, but it’s a challenge that we, of course, respond to very successfully, challenging in terms of time of deployment and in terms of scale of deployment.

Okay. Data centers need to be developed faster and faster and are becoming bigger and bigger. You heard that. It is about data centers. India is certainly privileged from an AI standpoint also because there is a lot of power available that can be harnessed for more and more powerful and larger data centers. Now, as that happens, again, if you think, go back to my analogy of the body, you think about a system, you think about everything that is the body of artificial intelligence, then it is about changing the way we build that body from one piece at a time with a lot of activity happening on site, laborious, hard from a quality standpoint, to most integrated at factory level and deployment.

As NVIDIA continues to lead the world in terms of technology, in terms of IT stack, but also in terms of thought process for the infrastructure. And it’s something that we do a lot together. Well, then it is not just about the infrastructure and the speed and the size and the scale. It’s also about optimizing the infrastructure with reference designs that exactly target that type of application. So we, of course, are thrilled and always honored to partner with NVIDIA in this adventure and venture and lead the market in this respect. So here you have an example of what we call Vertiv OneCore. There’s an example of a fully pre-engineered, defined data center. But when we talk about the body, the body of AI, the data center, then let’s talk very simply.

We talk about three fundamental, fundamental elements of that body. One is the powertrain. So everything that goes from the grid, if you will, from your utility, takes that power all the way to the chip. That power infrastructure is changing, is evolving as the power density changes. And the current architectures are migrating towards, over time, what is an 800-volt DC power infrastructure. I’m going technical on you. Some of you I know are very technical, so I’m not afraid about that. But I will not go deep. So everything you see on the left side of this is exactly a representation of that powertrain. So bring the energy, take that energy to the chip. Then the chip and all the electronic components in a server generate heat.

And that heat can be very dense. And require very, very advanced… cooling mechanisms. And that’s the beginning of what we like to call a thermal chain that starts, and it’s what you see on the right side of this chart from the chip all the way to the heat extraction, the heat then rejection, or even more importantly, and more extensively so, is the heat reuse. So this is the system, the fundamental systems of this body. But again, it’s not just about the components of the system, it’s how the entire system works. And more and more, we see that when we think about the AI IT infrastructure, what used to be thought of as a server at a time is becoming really an AI pod, an AI unit at a time.

The unit of compute is no more the server. It is the pod. Unit of compute is not even the pod. It’s the entire data center operating as one single computer. A unit of compute that can go all the way to gigawatts. So it is about making sure, and I believe we do it very well, that I say uniquely well, but of course I root for ourselves. It is about making that infrastructure available at scale and in a very easy modular to deploy fashion. And that’s what we do. So a repeatable, converged infrastructure. So we have a lot of building blocks that can go from 12.5 megawatts all the way to gigawatts, all the way to gigawatts.

So clearly it’s not just about building that infrastructure, but that infrastructure over time needs to be, like we like to say, future resilient. Some people, like myself, have been in the industry of data center for quite some time, and it’s fascinating the speed at which things happen. And this speed is also enabled by new solutions that make prefabricated and very fast to deploy part of data centers that used to be very, very laborious. Take a data center. It’s empty when the building is new. You have to fill it with power, with cooling, with cables. You have to put the racks. Very laborious and time consuming. Time to token is of the essence. A prefabrication, for example, with what we do with Vertiv SmartRun, reduces the time to deploy almost 85%, almost an order of magnitude.

So the industry is changing, not only in scale and in density, but also in the way things are done and deployed. Let me take a different angle now and focus on India. India clearly central to the AI evolution revolution and central certainly in terms of the infrastructure that is being built and the infrastructure that will be built in the future and in the coming years. This infrastructure and the speed at which this infrastructure will be built, of course, will depend, as I was saying, by the ability of the likes of Vertiv, but certainly Vertiv, given our prominent position also in India, to really enable this at scale and at speed in the ways that I explained.

So Vertiv in India has a long tradition. We’ve been here for decades. We have what I believe is an awesome team and awesome partnerships. And now this forum, these sessions, these few days convinced me even more of the importance of India as a place. A place to invest. And invest we will. We are expanding our capacity and will continue to expand capacity. We see India certainly as an extremely promising market. as a hub for AI, not only for India, but globally. So it has got the power availability. Certainly India has got the right demographics. So I couldn’t be more excited about the business in India. I couldn’t be more excited about what we’re doing in India and what our partners are doing in India.

So with that, I’m extremely optimistic. I’m a big optimist about what AI will bring, as we heard. And with that, thank you very much. Thank you.

Announcer

Thank you so much, Mr. Albertazzi, for your impactful address.

Video presentation:

Data centers have, up until now, been usually constructed in one of two ways. Traditional data center build follows a sequential process, materials and equipment arriving individually on site, with the build progressing from the ground up. Alternatively, prefabricated modular construction can offer many advantages, such as quicker deployments and risk reduction. Vertiv offers many solutions in this space.


Announcer

Speech speed

125 words per minute

Speech length

84 words

Speech time

40 seconds

Event framing

Explanation

The announcer opens the session by welcoming the audience and introducing Mr. Giordano Albertazzi, setting the stage for the discussion on AI infrastructure.


Evidence

“Ladies and gentlemen, please welcome Mr. Giordano Albertazzi.” [1].


Major discussion point

Event framing


Topics

The enabling environment for digital development



Giordano Albertazzi

Speech speed

123 words per minute

Speech length

1705 words

Speech time

830 seconds

Physical part essential for AI operation

Explanation

Albertazzi stresses that the physical infrastructure is a crucial, often overlooked component that makes AI possible, emphasizing that AI cannot exist without it.


Evidence

“There is an important, very important physical part of AI that sometimes is overlooked.” [16]. “And it shouldn’t, because it’s that physical part that makes AI actually possible.” [17]. “But let me go then to the physical part of AI, not just what AI can do for us.” [18].


Major discussion point

Physical infrastructure as AI foundation


Topics

Artificial intelligence | The enabling environment for digital development


Need for integrated, orchestrated infrastructure

Explanation

He argues that AI data center components must be interoperable and orchestrated as a single system to support the evolving AI stack.


Evidence

“Everything needs to be interoperable.” [24]. “Everything needs to be orchestrated.” [25].


Major discussion point

Physical infrastructure as AI foundation


Topics

Artificial intelligence | The enabling environment for digital development


GPU-driven power density rising to 30‑150 kW per rack, future 1 MW

Explanation

Albertazzi describes the rapid increase in rack power density driven by GPUs, noting current levels of 30‑150 kW and projecting future racks reaching up to 1 MW.


Evidence

“This is going to 30, 50, 150 kilowatts per rack all the way in the possible future.” [42]. “One megawatt per rack.” [43]. “What used to be a rack with IT inside 10, 15, 20 kilowatt per rack is rapidly becoming more dense and with more power and heat to dissipate in it.” [44].


Major discussion point

Densification and architectural evolution


Topics

Artificial intelligence | Environmental impacts | The enabling environment for digital development


Migration to 800 V DC power and advanced cooling solutions

Explanation

He notes the shift toward 800‑volt DC power architectures and the need for sophisticated cooling to handle extreme densification.


Evidence

“And the current architectures are migrating towards, over time, what is an 800 -volt DC power infrastructure.” [50]. “And require very, very advanced… cooling mechanisms.” [54].


Major discussion point

Densification and architectural evolution


Topics

Artificial intelligence | Environmental impacts | The enabling environment for digital development


Prefabricated modules cut deployment time by ~85%

Explanation

Albertazzi highlights that using prefabricated modules reduces data‑center deployment time dramatically, by roughly 85 %, accelerating AI infrastructure rollout.


Evidence

“A prefabrication, for example, with what we do with Vertiv SmartRun, reduces the time to deploy almost 85%, almost an order of magnitude.” [57].


Major discussion point

Accelerated deployment via prefabrication


Topics

The enabling environment for digital development | Artificial intelligence


Scalable, repeatable converged infrastructure from 12.5 MW to gigawatts

Explanation

He explains that Vertiv’s building blocks enable a modular, repeatable infrastructure that can scale from 12.5 MW up to gigawatt‑level capacities.


Evidence

“So we have a lot of building blocks that can go from 12.5 megawatts all the way to gigawatts, all the way to gigawatts.” [46].


Major discussion point

Accelerated deployment via prefabrication


Topics

The enabling environment for digital development | Artificial intelligence


India’s abundant power and favorable demographics make it a global AI hub

Explanation

Albertazzi points out that India’s large, available power supply and demographic advantages position it as a central hub for AI infrastructure worldwide.


Evidence

“Certainly India has got the right demographics.” [66]. “India is certainly privileged from an AI standpoint also because there is a lot of power available that can be harnessed for more and more powerful and larger data centers.” [67].


Major discussion point

India’s strategic role in AI infrastructure


Topics

Artificial intelligence | The enabling environment for digital development | Social and economic development


Vertiv’s long‑standing presence and optimism for Indian market

Explanation

He emphasizes Vertiv’s decades‑long history in India, its capacity expansion, and his optimism about the market’s future for AI infrastructure.


Evidence

“Vertiv in India has a long tradition.” [75]. “We see India certainly as an extremely promising market.” [70]. “I couldn’t be more excited about what we’re doing in India and what our partners are doing in India.” [73].


Major discussion point

India’s strategic role in AI infrastructure


Topics

Artificial intelligence | The enabling environment for digital development | Social and economic development


Collaboration with NVIDIA on reference designs and market leadership

Explanation

Albertazzi states that Vertiv partners with NVIDIA to develop reference designs that target AI workloads, positioning both companies as market leaders.


Evidence

“So we, of course, are thrilled and always honored to partner with NVIDIA in this adventure and venture and lead the market in this respect.” [82]. “As NVIDIA continues to lead the world in terms of technology, in terms of IT stack, but also in terms of thought process for the infrastructure.” [83].


Major discussion point

Partnerships and ecosystem


Topics

Artificial intelligence | The enabling environment for digital development



Video presentation

Speech speed

61 words per minute

Speech length

59 words

Speech time

57 seconds

Traditional sequential build vs prefabricated modular construction

Explanation

The video contrasts conventional data‑center construction, which follows a sequential, on‑site process, with prefabricated modular methods that promise quicker, lower‑risk deployments.


Evidence

“Traditional data center build follows a sequential process, materials and equipment arriving individually on site, with the build progressing from the ground up.” [85]. “Data centers have, up until now, been usually constructed in one of two ways.” [86].


Major discussion point

Construction approaches overview


Topics

The enabling environment for digital development | Environmental impacts


Vertiv provides modular solutions for faster, lower‑risk deployments

Explanation

The presentation notes that prefabricated modular construction, such as Vertiv’s solutions, can accelerate deployment and reduce risk compared with traditional builds.


Evidence

“Alternatively, prefabricated modular construction can offer many advantages, such as quicker deployments and risk reduction.” [58].


Major discussion point

Construction approaches overview


Topics

The enabling environment for digital development | Environmental impacts


Agreements

Agreement points

Prefabricated modular construction provides significant deployment advantages

Speakers

– Giordano Albertazzi
– Video presentation

Arguments

Prefabrication solutions can reduce deployment time by almost 85%, representing nearly an order of magnitude improvement


Prefabricated modular construction offers advantages like quicker deployments and risk reduction compared to traditional sequential building


Summary

Both speakers agree that prefabricated modular construction offers substantial benefits over traditional sequential building methods, particularly in terms of deployment speed and risk reduction. Albertazzi provides specific metrics showing 85% time reduction, while the video presentation emphasizes the general advantages of quicker deployments and risk reduction.


Topics

Information and communication technologies for development | The enabling environment for digital development


The event serves as an important platform for AI infrastructure knowledge sharing

Speakers

– Giordano Albertazzi
– Announcer

Arguments

AI requires substantial physical infrastructure including power, cooling, and data center systems that are often overlooked in AI discussions


The event provides a platform for discussing impactful developments in AI infrastructure


Summary

Both speakers acknowledge the significance of the forum for discussing AI infrastructure developments. Albertazzi uses the platform to highlight overlooked aspects of AI infrastructure, while the announcer formally recognizes the impactful nature of these discussions.


Topics

Artificial intelligence | Information and communication technologies for development


Similar viewpoints

Both speakers advocate for prefabricated modular construction as a superior approach to traditional data center building methods, emphasizing speed and efficiency benefits

Speakers

– Giordano Albertazzi
– Video presentation

Arguments

Prefabrication solutions can reduce deployment time by almost 85%, representing nearly an order of magnitude improvement


Prefabricated modular construction offers advantages like quicker deployments and risk reduction compared to traditional sequential building


Topics

Information and communication technologies for development | The enabling environment for digital development


Unexpected consensus

Strong emphasis on physical infrastructure in AI discussions

Speakers

– Giordano Albertazzi
– Announcer

Arguments

AI requires substantial physical infrastructure including power, cooling, and data center systems that are often overlooked in AI discussions


The event provides a platform for discussing impactful developments in AI infrastructure


Explanation

It’s somewhat unexpected that both the technical presenter and the event organizer (through the announcer) place such strong emphasis on the physical infrastructure aspects of AI, given that most AI discussions typically focus on software capabilities and applications. This consensus highlights a growing recognition of the critical importance of physical infrastructure in enabling AI advancement.


Topics

Artificial intelligence | Information and communication technologies for development


Overall assessment

Summary

The speakers demonstrate strong consensus on the importance of advanced infrastructure solutions for AI deployment, particularly around prefabricated modular construction methods and the critical role of physical infrastructure in AI development. There is also agreement on India’s strategic position in the global AI infrastructure landscape.


Consensus level

High level of consensus with complementary perspectives. The speakers approach the topic from different angles but arrive at similar conclusions about the importance of infrastructure innovation, deployment efficiency, and India’s role in AI development. This consensus suggests a mature understanding of AI infrastructure requirements and indicates strong alignment between industry practitioners and event organizers on key infrastructure priorities.


Differences

Different viewpoints

Unexpected differences

Overall assessment

Summary

No disagreements identified in the transcript


Disagreement level

This transcript contains a single presentation by Giordano Albertazzi about AI infrastructure requirements, with supportive remarks from an announcer and a complementary video presentation. All speakers are aligned in their messaging about the importance of physical infrastructure for AI, the benefits of prefabricated modular construction, and India’s strategic position in AI development. There are no opposing viewpoints, debates, or areas of disagreement present in this discussion format.


Partial agreements

None identified


Takeaways

Key takeaways

AI’s physical infrastructure requirements are often overlooked but are critical for AI functionality, requiring substantial power, cooling, and data center systems


Data center power density is experiencing extreme growth from 10-20 kilowatts per rack to potentially one megawatt per rack, fundamentally changing data center design


Modern AI data centers must be designed as integrated, orchestrated systems rather than collections of individual components


The unit of compute has evolved from individual servers to AI pods and entire data centers operating as single computers at gigawatt scale


Prefabricated modular construction can reduce deployment time by approximately 85%, representing a critical advantage for meeting AI infrastructure demands


Power infrastructure is migrating toward 800-volt DC systems with advanced thermal management chains


India is positioned as a central hub for AI infrastructure development due to available power resources and favorable demographics


Industry partnerships, particularly with NVIDIA, are essential for developing optimized reference designs for AI applications


Resolutions and action items

Vertiv will continue expanding capacity in India to support growing AI infrastructure needs


Vertiv will invest further in India as a promising market and global AI hub


Development of scalable building blocks ranging from 12.5 megawatts to gigawatts for repeatable infrastructure deployment


Unresolved issues

Specific timeline for achieving one megawatt per rack power density


Detailed technical specifications for the 800-volt DC power infrastructure migration


Specific investment amounts or expansion plans for Vertiv’s India operations


How to address the skills gap and workforce requirements for rapid AI infrastructure deployment


Regulatory or policy considerations for large-scale AI data center development in India


Suggested compromises

None identified


Thought provoking comments

But when we talk AI, we also talk about data centers. But let me go then to the physical part of AI, not just what AI can do for us. There is an important, very important physical part of AI that sometimes is overlooked. And it shouldn’t, because it’s that physical part that makes AI actually possible.

Speaker

Giordano Albertazzi


Reason

This comment is insightful because it reframes the entire AI conversation by highlighting a critical but often invisible aspect: the physical infrastructure. While most discussions focus on AI capabilities and applications, Albertazzi draws attention to the foundational layer that enables all AI functionality. This perspective shift is important because it grounds the abstract concept of AI in tangible engineering realities.


Impact

This comment established the central thesis of his presentation and created a clear departure from typical AI discussions. It set up a framework that would guide the entire talk, shifting focus from software capabilities to hardware necessities, and established his company’s relevance in the AI ecosystem.


Human intelligence happen in the brain. But the brain doesn’t survive without a body. What we are, what we do at Vertiv, make that body, provide that technology for that body so that the brain can function, and that brain is the IT stack.

Speaker

Giordano Albertazzi


Reason

This biological metaphor is particularly thought-provoking because it creates a powerful analogy that makes complex data center infrastructure relatable and understandable. It elegantly illustrates the symbiotic relationship between AI computing power (brain) and physical infrastructure (body), emphasizing that neither can function without the other.


Impact

This metaphor became a recurring theme throughout his presentation, providing a conceptual framework that helped the audience understand the holistic nature of AI infrastructure. It transformed technical discussions about power and cooling into something more intuitive and memorable.


What used to be a rack with IT inside 10, 15, 20 kilowatt per rack is rapidly becoming more dense and with more power and heat to dissipate in it. This is going to 30, 50, 150 kilowatts per rack all the way in the possible future. One megawatt per rack.

Speaker

Giordano Albertazzi


Reason

This comment is insightful because it quantifies the dramatic scale of change happening in data center infrastructure. The progression from 10-20 kilowatts to potentially one megawatt per rack illustrates the exponential nature of AI’s infrastructure demands. This puts concrete numbers to abstract concepts about AI’s growing computational needs.


Impact

These specific figures helped establish the urgency and magnitude of the infrastructure challenge. It moved the discussion from theoretical concepts to practical engineering problems, demonstrating why traditional data center approaches are becoming obsolete and why new solutions are critical.


Unit of compute is not even the pod. It’s the entire data center operating as one single computer. A unit of compute that can go all the way to gigawatts.

Speaker

Giordano Albertazzi


Reason

This comment is particularly thought-provoking because it challenges conventional thinking about computing architecture. The idea that an entire data center functions as a single computer represents a fundamental shift in how we conceptualize computing infrastructure. It suggests a level of integration and orchestration that goes far beyond traditional server-based thinking.


Impact

This comment elevated the discussion to a systems-thinking level, showing how AI is driving not just incremental improvements but fundamental architectural changes. It helped explain why traditional approaches to data center design and deployment are inadequate for AI applications.


India clearly central to the AI evolution revolution and central certainly in terms of the infrastructure that is being built… India has got the power availability. Certainly India has got the right demographics.

Speaker

Giordano Albertazzi


Reason

This comment is insightful because it positions India not just as a market for AI technology, but as a strategic hub for global AI infrastructure development. By highlighting power availability and demographics, he identifies specific competitive advantages that make India attractive for large-scale AI infrastructure investment.


Impact

This comment shifted the presentation from technical discussion to strategic business implications, connecting the infrastructure challenges he described to specific opportunities in the Indian market. It provided a concrete example of how the theoretical concepts translate to real-world investment and development decisions.


Overall assessment

These key comments shaped the discussion by systematically building a compelling narrative that reframed AI from a software-centric conversation to an infrastructure-centric one. Albertazzi’s insights created a logical progression: first establishing the importance of physical infrastructure, then using relatable metaphors to explain complex relationships, quantifying the scale of change, describing new architectural paradigms, and finally connecting these concepts to specific market opportunities. The biological metaphor of brain and body became a unifying theme that made technical concepts accessible, while the specific power density figures provided concrete evidence of the transformation occurring in the industry. Together, these comments transformed what could have been a dry technical presentation into a compelling argument for why physical infrastructure deserves equal attention to AI software capabilities, ultimately positioning his company and India as critical enablers of the AI revolution.


Follow-up questions

How will data centers handle the transition to one megawatt per rack power density?

Speaker

Giordano Albertazzi


Explanation

This represents an extreme increase in power density that will fundamentally change data center design and requires further investigation into cooling, power distribution, and infrastructure requirements


What are the specific technical requirements and challenges of implementing 800-volt DC power infrastructure?

Speaker

Giordano Albertazzi


Explanation

Albertazzi mentioned this as the future direction for power architecture but didn’t elaborate on the technical implementation details or migration challenges


How can heat reuse from AI data centers be effectively implemented at scale?

Speaker

Giordano Albertazzi


Explanation

Heat reuse was mentioned as important for the thermal chain but requires further research on practical applications and economic viability


What are the specific advantages and limitations of treating entire data centers as single computing units operating at gigawatt scale?

Speaker

Giordano Albertazzi


Explanation

This represents a paradigm shift from server-based to data center-scale computing that needs further exploration of operational, technical, and management implications


How can prefabricated data center solutions maintain quality and customization while achieving 85% deployment time reduction?

Speaker

Giordano Albertazzi


Explanation

The significant time savings claim requires investigation into quality control processes, customization capabilities, and potential trade-offs in prefabricated approaches


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.