OpenAI Inks $38bn Seven-Year Cloud Deal with AWS to Accelerate AI Model Training

OpenAI and Amazon Web Services (AWS) have finalized a $38 billion, seven-year agreement granting the artificial intelligence leader access to “hundreds of thousands” of Nvidia GPUs hosted on AWS’ cloud services, according to a joint announcement. 

The arrangement will enable OpenAI to significantly ramp up the computing resources available for AI model training, as the sector demands ever-greater processing power in pursuit of more advanced AI capabilities.

Effective immediately, OpenAI will run its AI workloads on AWS infrastructure, with full deployment of the allocated computing capacity targeted for the end of 2026. The deal also provides the flexibility to expand further through at least 2027.

This strategic partnership follows a shift in OpenAI’s cloud strategy, as Microsoft—previously OpenAI’s exclusive cloud provider—has given up its exclusive hosting arrangement and its right of first refusal on OpenAI’s AI workloads.

Amazon highlighted the scale of the partnership, confirming that OpenAI will initially tap hundreds of thousands of cutting-edge Nvidia GPUs, with the ability to expand to tens of millions of CPUs to handle rapidly growing and increasingly complex agentic workloads.

The latest developments arrive on the heels of OpenAI’s transition to a for-profit structure and a redefined agreement with Microsoft, which retains rights to OpenAI’s technology until artificial general intelligence (AGI) is achieved. Under the revised terms, OpenAI is permitted to collaborate with other partners on certain AI products and to release certain open-weight models.

The AWS collaboration underscores the mounting intensity of the global AI race, as leading technology firms compete to secure the computing resources that could propel breakthroughs in artificial intelligence.