NVIDIA launches Spectrum-XGS to connect AI data centres
Traditional AI data centres face space, power, and network limits, prompting NVIDIA to offer a ‘scale-across’ solution.
AI data centres are under growing pressure as computing demands exceed the capacity of single facilities. Traditional Ethernet networks suffer from high latency and inconsistent transfer performance, forcing companies either to build ever-larger centres or to accept degraded performance.
NVIDIA aims to tackle these challenges with its new Spectrum-XGS Ethernet technology, introducing ‘scale-across’ capabilities. The system links multiple AI data centres using distance-adaptive algorithms, congestion control, latency management, and end-to-end telemetry.
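NVIDIA has not published the internals of Spectrum-XGS, but the need for distance-adaptive congestion control can be illustrated with a toy calculation: the bandwidth-delay product (the number of bytes that must be in flight to keep a link busy) grows linearly with round-trip time, so a window sized for an intra-building hop starves a cross-site link. The figures below (a hypothetical 400 Gb/s link and example RTTs) are illustrative assumptions, not NVIDIA specifications.

```python
# Illustrative sketch only: not NVIDIA's actual algorithm.
# Shows why a congestion window tuned for a short intra-facility RTT
# collapses in throughput once traffic crosses between distant sites.

def bdp_bytes(bandwidth_gbps: float, rtt_ms: float) -> float:
    """Bandwidth-delay product: bytes in flight to fully utilise a link."""
    return bandwidth_gbps * 1e9 / 8 * (rtt_ms / 1000)

def achievable_gbps(window_bytes: float, bandwidth_gbps: float, rtt_ms: float) -> float:
    """Throughput with a fixed window: min(window / RTT, line rate)."""
    return min(window_bytes * 8 / (rtt_ms / 1000) / 1e9, bandwidth_gbps)

LINK_GBPS = 400  # hypothetical 400 GbE link
# Window sized for a ~10 microsecond RTT inside one facility (500 KB):
local_window = bdp_bytes(LINK_GBPS, rtt_ms=0.01)

for rtt_ms in (0.01, 1.0, 10.0):  # same room, metro distance, regional distance
    fixed = achievable_gbps(local_window, LINK_GBPS, rtt_ms)
    adaptive = achievable_gbps(bdp_bytes(LINK_GBPS, rtt_ms), LINK_GBPS, rtt_ms)
    print(f"RTT {rtt_ms:>5} ms: fixed window {fixed:6.1f} Gb/s, "
          f"distance-adaptive {adaptive:6.1f} Gb/s")
```

With the fixed intra-facility window, throughput drops from 400 Gb/s to 4 Gb/s at a 1 ms metro RTT, while a window that adapts to the measured distance sustains line rate; that gap is the problem a 'scale-across' fabric has to solve.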
NVIDIA claims the improvements can nearly double GPU communication performance, supporting what it calls ‘giga-scale AI super-factories.’
CoreWeave plans to be among the first adopters, connecting its facilities into a single distributed supercomputer. The deployment will test whether Spectrum-XGS can deliver fast, reliable AI workloads across multiple sites without relying on massive single-location centres.
While the technology promises greater efficiency and distributed computing power, its effectiveness depends on real-world infrastructure, regulatory compliance, and data synchronisation.
If successful, it could reshape AI data centre design, enabling faster services and potentially lower operational costs across industries.