Hyperscale data centres planned under Meta and NVIDIA deal
The partnership between Meta and NVIDIA, combining advanced GPUs, CPUs and networking, is designed to support next-generation AI workloads at global scale while improving efficiency.
Meta announced a multiyear partnership with NVIDIA to build large-scale AI infrastructure across on-premises and cloud systems. Plans include hyperscale data centres designed for both training and inference workloads, forming a core part of the company’s long-term AI roadmap.
Deployment will include millions of Blackwell and Rubin GPUs, plus expanded use of NVIDIA CPUs and Spectrum-X networking. According to Mark Zuckerberg, the collaboration is intended to support advanced AI systems and broaden access to high-performance computing capabilities worldwide.
Jensen Huang highlighted the scale of Meta’s AI operations and the role of deep hardware-software integration in improving performance.
Efficiency gains remain a central objective, with Meta increasing the rollout of Arm-based NVIDIA Grace CPUs to improve performance per watt in its data centres. Meta is also considering future deployment of NVIDIA's Vera CPUs to expand energy-efficient computing later in the decade.
Privacy-focused AI development forms another pillar of the partnership. NVIDIA Confidential Computing will first power secure AI features on WhatsApp, with plans to expand across more services as Meta scales AI to billions of users.
