Meta and NVIDIA have signed a multiyear partnership that will support “the large-scale deployment of NVIDIA CPUs, networking and millions of NVIDIA Blackwell and Rubin GPUs” across Meta’s AI infrastructure.

What NVIDIA and Meta disclosed

NVIDIA said the partnership spans multiple parts of its data center stack, including Blackwell and Rubin GPU platforms, Spectrum-X networking and Arm-based Grace CPUs, with collaboration on Vera CPUs and “potential for large-scale deployment in 2027.”

NVIDIA has positioned Spectrum-X as an Ethernet platform purpose-built for hyperscale AI networks, aiming to scale AI fabrics to very large GPU counts.

In an earlier Spectrum-X press release, NVIDIA said Meta would integrate Spectrum Ethernet switches into its Facebook Open Switching System (FBOSS) environment and cited “millions of GPUs” as the scale target for Spectrum-X interconnects.

NVIDIA also said Meta will deploy systems in on-premises data centers and through NVIDIA Cloud Partner deployments, but did not disclose the split between those locations or any unit-by-year delivery schedule.

Capex context from Meta’s own record

The deal lands as Meta has guided investors to $115 billion to $135 billion in 2026 capital expenditures, including principal payments on finance leases, driven by increased infrastructure investment to support its Meta Superintelligence Labs efforts and core business.

Meta’s 2026 capex guidance is part of its published earnings materials and also appears in the company’s SEC-filed exhibit covering full-year 2025 results.

Meta also disclosed full-year 2025 capital expenditures, including principal payments on finance leases, of $72.22 billion. Taken against that figure, the midpoint of the 2026 guidance ($125 billion) implies roughly 73 percent year-over-year growth.

WhatsApp “Private Processing” and confidential compute, from Meta’s technical materials

The NVIDIA partnership announcement references confidential computing, and Meta has separately published technical documentation describing “Private Processing” for WhatsApp.

Meta’s “Private Processing” technical whitepaper describes a design intended to enable AI features while reducing access to user content during processing, and it discusses the role of confidential compute hardware in that architecture.

Meta’s engineering blog post on Private Processing also describes the same initiative as a secure cloud environment intended to support AI tooling while limiting exposure of sensitive data.
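
Meta’s materials do not publish code for this flow, but the confidential-compute pattern they describe is well established: a client verifies hardware-backed attestation of the server environment before releasing any data to it. The sketch below is a minimal, hypothetical illustration of that ordering in Python; the `Attestation` structure, the measurement values, and the HMAC-based signature are stand-ins for real hardware attestation formats, which Meta has not detailed publicly.

```python
import hmac
import hashlib
from dataclasses import dataclass

# Hypothetical stand-ins: real confidential-compute attestation uses
# hardware-rooted keys and vendor-defined evidence formats, not a shared HMAC key.
TRUSTED_ATTESTATION_KEY = b"demo-root-of-trust-key"
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-enclave-image-v1").hexdigest()

@dataclass
class Attestation:
    measurement: str  # hash of the code loaded in the trusted environment
    signature: bytes  # evidence binding the measurement to the hardware

def sign(measurement: str) -> bytes:
    """Simulate the trusted hardware signing its own measurement."""
    return hmac.new(TRUSTED_ATTESTATION_KEY, measurement.encode(), hashlib.sha256).digest()

def client_should_send_data(att: Attestation) -> bool:
    """Release user data only if the evidence verifies AND the code matches policy."""
    expected_sig = hmac.new(TRUSTED_ATTESTATION_KEY, att.measurement.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(att.signature, expected_sig):
        return False  # evidence was not produced by trusted hardware
    return att.measurement == EXPECTED_MEASUREMENT  # unapproved code is rejected

# A compliant environment passes; a modified one is refused.
good = Attestation(EXPECTED_MEASUREMENT, sign(EXPECTED_MEASUREMENT))
tampered = hashlib.sha256(b"tampered-image").hexdigest()
bad = Attestation(tampered, sign(tampered))
print(client_should_send_data(good))  # True
print(client_should_send_data(bad))   # False
```

The point of the sketch is only the sequencing Meta’s documents emphasize: the environment is verified before any user content leaves the device, so unapproved server code never receives the data.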

How this sits alongside Meta’s other infrastructure paths

The NVIDIA partnership extends Meta’s record of pursuing multiple infrastructure tracks in parallel, including open rack work within the Open Compute ecosystem.

AMD has publicly tied its “Helios” rack-scale platform to Meta’s Open Compute Project work, saying in a blog post that Helios is built on Meta’s 2025 OCP Open Rack for AI design.

AMD’s investor press release on the same date said Helios is aligned with an Open Rack standard contributed to OCP by Meta.
