OpenAI signed a $38 billion, seven-year cloud agreement with Amazon Web Services to run and scale its core AI workloads, the companies said in an announcement.

Under the agreement, AWS will provide Amazon EC2 UltraServers with “hundreds of thousands” of Nvidia GPUs and capacity to scale to “tens of millions of CPUs.” AWS said all of that capacity is targeted for deployment before the end of 2026, with room to expand further in 2027 and beyond.

“Scaling frontier AI requires massive, reliable compute,” OpenAI CEO Sam Altman said. AWS CEO Matt Garman called AWS’s infrastructure the “backbone” for OpenAI’s ambitions.

For enterprises already on AWS, OpenAI noted that its open-weight foundation models are available through Amazon Bedrock, citing customers such as Thomson Reuters and Peloton that use the models for agentic workflows and analysis.
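For context, Bedrock exposes hosted models through the standard AWS SDK. The sketch below shows how an enterprise might call an OpenAI open-weight model via boto3's Converse API; the model identifier and region are illustrative assumptions, not details from the announcement, so check the Bedrock console for the exact values available in your account.

```python
import boto3

# Bedrock runtime client; region is an assumption for illustration.
client = boto3.client("bedrock-runtime", region_name="us-west-2")

response = client.converse(
    # Assumed identifier for an OpenAI open-weight model on Bedrock;
    # verify the actual model ID in the Bedrock model catalog.
    modelId="openai.gpt-oss-120b-1:0",
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize the key terms of this contract."}],
        }
    ],
)

# The Converse API returns the assistant reply as a list of content blocks.
print(response["output"]["message"]["content"][0]["text"])
```

Because Bedrock fronts the model behind AWS's own API and IAM controls, enterprises can switch between hosted models without changing their application code beyond the model ID.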

OpenAI added that it will begin using AWS infrastructure immediately under the partnership.