Anthropic to pour $100 billion into AWS over a decade as it standardises on Amazon’s chips and cloud for training Claude and future AI models.

Amazon and Anthropic have expanded their partnership with a fresh $5-billion investment, as part of a broader agreement that could see Amazon commit up to $25 billion in total, including an additional $20 billion tied to commercial milestones.
The investment builds on the roughly $8 billion Amazon has already committed to Anthropic. Alongside this, Anthropic has agreed to spend more than $100 billion over the next decade on Amazon Web Services (AWS), making AWS its primary cloud and training partner for large-scale AI workloads.
The companies have been working together since 2023, with more than 100,000 customers currently running Anthropic’s Claude models on AWS. Claude is also among the most widely used model families on Amazon Bedrock, AWS’s platform for accessing third-party and proprietary AI models.
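For readers unfamiliar with Bedrock, a minimal sketch of calling a Claude model through it might look like the following. This assumes boto3 is installed and AWS credentials are configured; the model ID and region are illustrative, not drawn from the article:

```python
import json

def build_claude_request(prompt: str, max_tokens: int = 512) -> str:
    # Request body in the Anthropic Messages format that Bedrock expects
    # for Claude models.
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

def invoke_claude(prompt: str) -> str:
    # Requires boto3 and AWS credentials; model ID and region are
    # illustrative examples, not specified by the article.
    import boto3
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.invoke_model(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
        body=build_claude_request(prompt),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]
```

The same request shape works across the Claude family on Bedrock; only the `modelId` changes, which is part of what makes the platform attractive for the enterprise deployments described below.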
The expanded collaboration centres on infrastructure. Anthropic will use multiple generations of Amazon’s custom AI chips—Trainium2, Trainium3, Trainium4, and future versions—along with tens of millions of Graviton CPU cores to train and deploy its models.
As part of the agreement, Anthropic will secure up to 5 gigawatts of compute capacity. The companies are also jointly working on Project Rainier, an AI compute cluster built from nearly half a million Trainium2 chips, which is already being used to train and run current Claude models and will support future ones.
Anthropic is also working with Amazon’s Annapurna Labs to optimise Trainium chips, providing feedback from model training workloads to inform future chip design.
The partnership includes product integration as well. AWS customers will be able to access Anthropic’s native Claude platform directly through their AWS accounts, without requiring separate credentials or billing systems, while continuing to use existing AWS access controls and monitoring tools.
The deal comes amid intensifying competition among large technology companies to secure both AI models and the infrastructure that powers them.
Microsoft has committed more than $10 billion to OpenAI, integrating its models across Azure, while Google has also invested in Anthropic and is developing its own models.
At the hardware level, Nvidia continues to dominate the market for AI training chips through its GPUs, which are widely used across the industry. Amazon, through its Trainium and Graviton chips, is positioning in-house silicon as an alternative for large-scale AI workloads.
Anthropic’s commitment to AWS secures long-term infrastructure demand for Amazon as AI companies scale up compute usage. The agreement also includes expanded inference capacity across Asia and Europe to support growing international demand for Claude models.
Enterprise adoption of Claude on AWS is already underway. Companies including Lyft and Pfizer are using the models for applications such as customer support automation and document analysis within existing AWS environments.