NVIDIA recently introduced fully integrated systems, such as the GB200/300 NVL72, which combine Blackwell GPUs with Grace ARM CPUs and leverage NVLink for high-performance interconnects. These platforms showcase what’s possible when the CPU–GPU connection evolves in lockstep with NVIDIA’s accelerated roadmap. As a result, ARM achieved a 25 percent revenue share of the server CPU market in 2Q25, with NVIDIA representing a significant portion due to strong adoption by major cloud service providers.

However, adoption of such proprietary systems may not reach its full potential in the broader enterprise market, as many customers prefer the flexibility of the open ecosystem and established CPU vendors behind the x86 architecture. Yet the performance of GPU-accelerated applications on x86 has long been constrained by the pace of the PCIe roadmap for both scale-up and scale-out connectivity. While GPUs continue to advance on an 18-month (or shorter) cycle, CPU-to-GPU communication over PCIe has progressed more slowly, often becoming the limiting factor for system-level GPU scalability.
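
To put that gap in rough perspective, the sketch below compares approximate bandwidth figures for recent x16 PCIe links against NVLink's per-GPU bandwidth. The numbers are approximate public specifications rather than figures from this analysis, and effective throughput in practice is lower, so the ratios are only an illustration of why PCIe-attached GPUs can become communication-bound.

```python
# Illustrative comparison of CPU-GPU link bandwidth (approximate public
# specifications, not measured results; real-world throughput is lower).

# Bidirectional bandwidth of an x16 PCIe link, in GB/s (approximate).
pcie_x16_bidir_gbs = {
    "PCIe Gen4 x16": 64,
    "PCIe Gen5 x16": 128,
    "PCIe Gen6 x16": 256,
}

# Total bidirectional NVLink bandwidth per GPU, in GB/s (approximate).
nvlink_per_gpu_bidir_gbs = {
    "NVLink 4 (Hopper)": 900,
    "NVLink 5 (Blackwell)": 1800,
}

nvlink5 = nvlink_per_gpu_bidir_gbs["NVLink 5 (Blackwell)"]

for gen, bw in pcie_x16_bidir_gbs.items():
    # Ratio of NVLink 5 per-GPU bandwidth to a single x16 PCIe link.
    print(f"{gen}: {bw} GB/s bidirectional -> NVLink 5 offers ~{nvlink5 / bw:.0f}x more")
```

Even against PCIe Gen6, a several-fold gap remains per link, and that shortfall is precisely what NVLink Fusion on x86 platforms is intended to close.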

The new Intel–NVIDIA partnership is designed to close this gap. With NVLink Fusion available on Intel’s x86 platforms, enterprises can scale GPU clusters on familiar infrastructure while benefiting from NVLink’s higher bandwidth and lower latency. In practice, this brings x86 systems much closer to the scalability of NVIDIA’s own NVL-based rack designs, without requiring customers to fully commit to a proprietary stack.

For Intel, the agreement ensures continued relevance in the AI infrastructure market despite the lack of a competitive GPU portfolio. For server OEMs, it opens up new design opportunities: they can pair customized Intel x86 CPUs with NVIDIA GPUs in a wider range of configurations—creating more differentiated offerings from individual boards to full racks—while retaining flexibility for diverse workloads.

The beneficiaries of this development include:
  • NVIDIA, which extends NVLink adoption into the broader x86 ecosystem.
  • Intel, which can play a key role in the AI systems market despite lacking a competitive GPU portfolio, bolstered by NVIDIA’s $5 billion investment.
  • Server OEMs, which gain more freedom to innovate and differentiate x86 system designs.
At the same time, there are competitive implications:
  • AMD is unlikely to participate, as its CPUs compete with Intel's and its GPUs compete with NVIDIA's. The company continues to pursue its own interconnect strategy through UALink.
  • ARM may see reduced momentum for external enterprise AI workloads if x86 platforms can now support higher GPU scalability. That said, cloud providers may continue to use ARM for internal workloads and could explore custom ARM CPUs with NVLink Fusion.

Ultimately, NVLink Fusion on Intel x86 platforms narrows the gap between systems based on a mainstream architecture and NVIDIA’s proprietary designs. It aligns x86 and GPU roadmaps more closely, giving enterprises a more scalable path forward while preserving choice across CPUs, GPUs, and system architectures.