AWS becomes key partner in OpenAI’s $38 billion AI growth plan 

The multi-year deal positions AWS as the backbone of OpenAI’s computing operations, supporting global AI expansion through 2027 and beyond.

OpenAI and AWS have announced a $38 billion partnership to expand AI capacity and scale next-generation models using AWS’s advanced infrastructure.

Amazon Web Services (AWS) and OpenAI have entered a seven-year, $38 billion partnership under which OpenAI will run and scale its AI workloads on AWS infrastructure. The deal grants OpenAI access to vast NVIDIA GPU clusters and the capacity to scale to millions of CPUs.

The collaboration aims to meet the growing global demand for computing power driven by rapid advances in generative AI.

OpenAI will immediately begin using AWS compute resources, with all capacity expected to be fully deployed by the end of 2026. The infrastructure will optimise AI performance by clustering NVIDIA GB200 and GB300 GPUs via Amazon EC2 UltraServers for low-latency, large-scale processing.

These clusters will support tasks such as training new models and serving inference for ChatGPT.

OpenAI CEO Sam Altman said the partnership would help scale frontier AI securely and reliably, describing it as a foundation for ‘bringing advanced AI to everyone.’ AWS CEO Matt Garman noted that AWS’s computing power and reliability make it uniquely positioned to support OpenAI’s growing workloads.

The move strengthens an already active collaboration between the two firms. Earlier this year, OpenAI’s open-weight models became available on Amazon Bedrock, enabling AWS clients such as Peloton, Thomson Reuters, and Comscore to adopt advanced AI tools.
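For AWS customers, Bedrock exposes such models through its standard Converse API. A minimal sketch, assuming the boto3 SDK and an illustrative model identifier (the exact ID and regional availability should be confirmed in the Bedrock model catalogue), might look like this:

```python
# Minimal sketch: calling an OpenAI open-weight model hosted on Amazon Bedrock.
# Assumes AWS credentials are configured and the model is enabled in the chosen region.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-west-2")

response = client.converse(
    modelId="openai.gpt-oss-120b-1:0",  # illustrative model ID, not taken from the article
    messages=[
        {"role": "user", "content": [{"text": "Summarise the AWS-OpenAI partnership in one sentence."}]}
    ],
    inferenceConfig={"maxTokens": 200},
)

# The Converse API returns the assistant reply as a list of content blocks.
print(response["output"]["message"]["content"][0]["text"])
```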
