NVIDIA Spectrum-X Expands With Open MRC Protocol
NVIDIA has added support for Multipath Reliable Connection (MRC) to its Spectrum-X Ethernet platform, bringing an open RDMA transport protocol already running inside some of the world's largest AI training clusters to the broader industry. The combination of Spectrum-X and MRC has now been formalized as a public specification through the Open Compute Project (OCP), marking a shift from proprietary deployment to an open standard.
What Multipath Reliable Connection Actually Does
Traditional RDMA connections tie traffic to a single network path. MRC changes that by letting one connection distribute data across multiple paths simultaneously, improving throughput, load balancing, and fault tolerance across large-scale AI training fabrics.
NVIDIA describes it as swapping a single-lane road for a full street grid with real-time routing built in. When congestion builds on one path, traffic shifts elsewhere automatically. When a path fails entirely, hardware-level detection and rerouting kick in within microseconds, keeping GPU clusters synchronized during long training runs where even brief interruptions can stall an entire job.
According to the NVIDIA blog post, the protocol also supports intelligent retransmission on data loss, fine-grained traffic visibility for administrators, and hardware-accelerated load balancing across multiplanar network designs, such as those OpenAI deploys in production.
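To make the multipath behavior concrete, the toy sketch below models the three ideas described above: spraying one connection's traffic across several paths, steering each chunk toward the least-congested live path, and retransmitting only the chunk that was lost when a path fails. All names (`Path`, `MultipathConnection`, etc.) are hypothetical illustrations; this is a conceptual model, not the MRC wire protocol or an NVIDIA API.

```python
class Path:
    """Toy model of one network path: tracks congestion and liveness."""

    def __init__(self, name):
        self.name = name
        self.congestion = 0.0  # 0.0 = idle, 1.0 = saturated
        self.alive = True


class MultipathConnection:
    """Illustrative sketch of congestion-aware multipath spraying with
    failover and selective retransmission (a conceptual toy, not MRC)."""

    def __init__(self, paths):
        self.paths = paths

    def pick_path(self):
        # Prefer the least-congested live path; dead paths are skipped,
        # which models automatic failover.
        live = [p for p in self.paths if p.alive]
        if not live:
            raise RuntimeError("no live paths remaining")
        return min(live, key=lambda p: p.congestion)

    def send(self, chunks):
        """Send (seq, data) chunks; returns which path carried each one."""
        delivered = {}
        pending = list(chunks)
        while pending:
            seq, data = pending.pop(0)
            path = self.pick_path()
            if self.transmit(path, data):
                delivered[seq] = path.name
            else:
                # Selective retransmission: only the lost chunk is
                # re-queued, and the failed path is marked dead so the
                # next attempt is routed elsewhere.
                path.alive = False
                pending.insert(0, (seq, data))
        return delivered

    def transmit(self, path, data):
        # Placeholder for a real RDMA send; here a send succeeds unless
        # the path has been marked dead, and each send adds congestion.
        path.congestion = min(1.0, path.congestion + 0.1)
        return path.alive
```

With two paths, successive chunks alternate as congestion accumulates on whichever path was just used, and marking a path dead transparently shifts subsequent traffic to the survivor.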
Who Is Already Running It at Scale
MRC was not developed in isolation. NVIDIA collaborated with AMD, Broadcom, Intel, Microsoft, and OpenAI on the protocol, and several of those partners have already deployed it in production environments.
OpenAI runs Spectrum-X MRC across its Blackwell-generation infrastructure. Sachin Katti, head of industrial compute at OpenAI, said: "Deploying MRC in the Blackwell generation was very successful and was made possible by a strong collaboration with NVIDIA. MRC's end-to-end approach enabled us to avoid much of the typical network-related slowdowns and interruptions and maintain the efficiency of frontier training runs at scale."
Microsoft's Fairwater and Oracle Cloud Infrastructure's Abilene data centers, both purpose-built for training and deploying frontier large language models, also rely on MRC as part of their core AI Ethernet fabric design. The open specification release through OCP now makes the same transport model available to any operator building at comparable scale.
What AI Infrastructure Teams Should Watch Next
Spectrum-X Ethernet now supports multiple RDMA transport options side by side. Operators can run Spectrum-X Adaptive RDMA, MRC, or other custom protocols across NVIDIA ConnectX SuperNICs and Spectrum-X switches, all with support for multiplanar network designs suited to gigascale AI networking deployments reaching hundreds of thousands of GPUs.
The key question going forward is whether MRC gains traction beyond NVIDIA's existing Spectrum-X deployments. Its open specification status gives cloud providers, hardware vendors, and infrastructure teams a clearer path to evaluate and adopt it independently. That wider uptake will determine whether MRC becomes a genuine industry standard or remains closely tied to NVIDIA's own ecosystem.
Full technical details on the MRC specification and the Spectrum-X Ethernet platform are available in NVIDIA's announcement on the NVIDIA Blog.