NVIDIA has introduced Spectrum-XGS Ethernet, a new technology aimed at connecting dispersed data centers into unified, giga-scale AI ‘super-factories.’ The launch extends NVIDIA’s Spectrum-X Ethernet platform with what the company calls the third pillar of AI computing: scale-across infrastructure.
Unlike traditional methods that focus on scaling up within a single system or scaling out across servers in one location, Spectrum-XGS extends AI clusters across multiple, geographically distributed data centers to function as one.
This announcement comes as the demand for AI infrastructure is rapidly surpassing the capacity of individual facilities. Traditional Ethernet networking equipment, with its latency and performance variability, often falls short for the communication needs of advanced AI workloads. Spectrum-XGS Ethernet addresses these issues by creating high-speed, low-latency links between distant facilities, effectively transforming them into a single, cohesive AI factory.
NVIDIA founder and CEO Jensen Huang described this development as the rise of giant-scale AI factories, calling them “the essential infrastructure” of the AI industrial revolution. “We connect data centers across cities, countries, and continents into massive, giga-scale AI super-factories by adding scale-across to scale-up and scale-out capabilities with NVIDIA Spectrum-XGS Ethernet,” he stated.
The technology integrates with the existing Spectrum-X platform, using algorithms that dynamically adjust to the physical distance between sites. This allows it to manage long-distance congestion, reduce jitter, and provide predictable performance. NVIDIA claims the system nearly doubles the performance of the NVIDIA Collective Communications Library (NCCL) by optimizing multi-GPU and multi-node communication across locations. The result is that data centers, whether separated by a few kilometers or entire continents, can operate as though they were one.
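The intuition behind distance-aware tuning can be illustrated with the bandwidth-delay product: the farther apart two sites are, the higher the round-trip time, and the more data must be kept in flight to keep the link fully utilized. The sketch below is purely illustrative; the function names, latency figures, and formulas are assumptions for explanation and do not describe NVIDIA's actual algorithms.

```python
# Illustrative sketch of the bandwidth-delay product (BDP) calculation that
# underlies distance-aware congestion control. All names and numbers here are
# assumptions for explanation, not NVIDIA's implementation.

def rtt_from_distance(distance_km: float, per_hop_latency_us: float = 10.0,
                      hops: int = 4) -> float:
    """Rough round-trip time in ms: fiber propagation (~5 us per km, each way)
    plus a fixed switching latency per hop."""
    propagation_ms = 2 * distance_km * 5e-6 * 1e3   # 5 us/km, both directions
    switching_ms = 2 * hops * per_hop_latency_us / 1e3
    return propagation_ms + switching_ms

def bdp_bytes(bandwidth_gbps: float, rtt_ms: float) -> int:
    """Bytes that must be in flight to keep a link of the given bandwidth
    busy for one full round trip (BDP = bandwidth x RTT)."""
    bits_in_flight = bandwidth_gbps * 1e9 * (rtt_ms / 1e3)
    return int(bits_in_flight / 8)

# Example: a hypothetical 400 Gb/s inter-site link between data centers
# 100 km apart needs tens of megabytes in flight to stay saturated.
rtt = rtt_from_distance(100)
window = bdp_bytes(400, rtt)
print(f"RTT ~ {rtt:.2f} ms, in-flight window ~ {window / 1e6:.1f} MB")
```

The takeaway: a congestion-control scheme tuned for a single data hall (sub-millisecond RTTs) would leave a long-haul link mostly idle, which is why adapting to inter-site distance matters for scale-across traffic.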
CoreWeave, a cloud provider specializing in high-performance computing and an early adopter of NVIDIA infrastructure, will be among the first to deploy Spectrum-XGS Ethernet to link its facilities. “We can integrate our data centers into a single, unified supercomputer with NVIDIA Spectrum-XGS, providing our customers with giga-scale AI that will speed up innovations in every industry,” said Peter Salanki, CoreWeave’s cofounder and Chief Technology Officer.
For multi-tenant, hyperscale AI environments, including what NVIDIA calls the world’s largest AI supercomputers, Spectrum-X Ethernet promises 1.6 times the bandwidth density of conventional Ethernet. The solution pairs NVIDIA Spectrum-X switches with NVIDIA ConnectX-8 SuperNICs to deliver ultra-low latency, scalable performance, and end-to-end telemetry capabilities.
This development follows a wave of networking innovations from NVIDIA, including its Quantum-X silicon photonics switches, designed to reduce power use while enabling the interconnection of millions of GPUs across distributed locations. Spectrum-XGS builds on this momentum by offering enterprises, cloud providers, and AI specialists the ability to construct networks that scale globally without compromising performance.
Now commercially available as part of the NVIDIA Spectrum-X Ethernet platform, Spectrum-XGS Ethernet is positioned as a key enabler of the next phase of AI infrastructure, where multiple sites converge to form globally connected computing clusters.