Meta AWS Graviton Deal Expands AI Infrastructure
The Meta–AWS Graviton deal creates a multiyear commitment that could reprice AWS cloud exposure and shift enterprise revenue expectations.

KEY TAKEAWAYS
- Meta signed a multiyear, multibillion-dollar agreement with AWS to deploy Graviton CPUs at scale.
- Initial deployment is reported at tens of millions of cores; figures vary across reports because some count chips and others count cores.
- Deal broadens AWS cloud exposure and informs capital allocation for Meta's AI infrastructure buildout.
Meta Platforms signed a multiyear agreement with Amazon Web Services, a unit of Amazon (AMZN), to deploy AWS Graviton processors at scale, aiming to accelerate agentic AI workloads and complement GPU training, the companies announced on April 24, 2026.
Deal Terms and Scale
Meta will deploy AWS Graviton chips under a multibillion-dollar, multiyear contract. Initial deployment will start with tens of millions of Graviton cores and can expand as Meta’s AI needs evolve. Some reports identified the chip variant as Graviton5 and specified the term as at least three years.
Coverage varied on scale metrics, with figures ranging from hundreds of thousands to millions of chips; the spread reflects differences between chip and core counts rather than conflicting reporting. The arrangement marks a significant expansion of Meta's AI infrastructure capacity, adding CPU-based processors alongside its GPU fleets.
Strategic AI Context
The deal extends a long-standing partnership between Meta and AWS and follows Meta’s earlier $48 billion AI infrastructure commitments with other cloud providers. It targets next-generation AI, including agentic AI, by using Graviton CPUs to handle non-training workloads that complement GPU-based training.
This multiyear commitment broadens AWS’s cloud exposure and adds a sizable customer contract to Amazon’s enterprise backlog. For Meta, it signals a procurement strategy that pairs CPU capacity with GPUs to diversify AI compute architecture as workloads become more complex.
The announcement appeared on April 24, 2026, with initial reports emerging between 8:00 and 8:10 a.m. ET.