Meta to Deploy Millions of Nvidia GPUs as AI Buildout Pushes CapEx Higher

Nvidia and Meta have struck a multi-year partnership to expand Meta’s use of Nvidia chips and networking gear as the social media company ramps up spending on AI infrastructure.
The agreement, announced on Tuesday, covers large-scale deployments of Nvidia’s current Blackwell graphics processors and its next-generation Rubin GPUs, alongside Nvidia’s Grace and planned Vera server CPUs.
Meta also plans to adopt Nvidia’s Spectrum-X Ethernet platform in conjunction with its Facebook Open Switching System (FBOSS), and to use Nvidia’s “confidential computing” technology to support privacy-focused AI processing for WhatsApp.
Financial terms were not disclosed. Reuters reported the deal calls for Meta to buy “millions” of Nvidia AI chips over multiple generations, reinforcing Meta’s position as one of Nvidia’s largest customers even as the company continues work on internally designed AI accelerators.
For Nvidia, the announcement highlights a push to sell hyperscale customers more of the data-center stack beyond GPUs, including Arm-based CPUs and networking gear. Reuters noted Nvidia is positioning its Grace and Vera processors for broader data-processing tasks and for emerging AI “agent” workloads, aiming to widen its role inside AI data centers that have historically relied on x86 CPUs from Intel and AMD.
Meta said the expanded collaboration will support hyperscale facilities optimized for both training and inference, and that it expects the partnership to improve “performance per watt” in data-center operations through CPU deployment and software optimization. The companies also said Meta has adopted Nvidia’s confidential computing for WhatsApp “private processing,” a design approach intended to let AI features run while keeping data protected from exposure in broader infrastructure layers.
The deal lands as Meta sharply increases its capital spending plans tied to AI. In its latest earnings release, Meta said it expects 2026 capital expenditures, including principal payments on finance leases, to be in the range of $115 billion to $135 billion, citing increased investment to support its AI efforts and core business. Reuters separately reported that Meta also forecast higher 2026 expenses as it steps up AI hiring and infrastructure buildout.
Meta’s guidance is part of a broader surge in AI-related data-center investment across Big Tech, with companies racing to secure chips, networking equipment, land and power. Credit and ratings analysts have characterized the current cycle as a period of hyperscaler “hyperspending,” reflecting how AI training clusters and inference deployments are driving unusually large infrastructure budgets.
Investors have increasingly focused on whether the wave of AI capital expenditures will translate into durable revenue growth and margins. A separate Reuters report on the same day pointed to rising debate around the valuations of large AI-exposed technology stocks amid concerns that heavy AI investments could take longer to pay off than expected.
For Meta, the scale of the Nvidia deployment underscores both the urgency of expanding compute capacity and the practical limits of its in-house silicon efforts. Reuters reported Meta has been developing its own AI chips and has explored alternatives such as Google’s Tensor Processing Units, but the new agreement signals continued reliance on Nvidia for large volumes of cutting-edge accelerators.