Explained: Why tech giants are betting on Anthropic amid the AI compute bottleneck


Inside Anthropic’s multibillion-dollar cloud deals with Google, Amazon and others—and how compute access is reshaping the AI power balance

Anthropic has taken centre stage after the latest wave of capital flowing into the company. With Google committing $40 billion in tranches, the funding shows how the financing and scaling of AI companies are increasingly driven by access to computing power, and by the bottlenecks that constrain it. 

Fortune India breaks down Anthropic’s recent funding rounds and what they mean for AI companies. 

1. What does Google’s investment in Anthropic actually mean? 

Google’s commitment of $10 billion upfront, with another $30 billion tied to performance, resembles a funding round but functions more like a long-term infrastructure agreement. Here, Anthropic is expanding its ability to build and run AI systems at scale while receiving capital. In return, Google positions itself to capture a share of future demand by anchoring Anthropic to its cloud and compute ecosystem. 

The economics are already visible. Anthropic’s annualised revenue run rate has risen from $9 billion at the end of 2025 to $30 billion in 2026, with over 1,000 enterprise customers spending more than $1 million annually. This growth translates directly into rising compute demand, which cloud providers like Google are actively competing to absorb. 

2. How do Amazon, CoreWeave, and others fit into this picture? 

Google is not the only provider Anthropic is relying on. A week earlier, the Claude maker partnered with Amazon Web Services, and the company is expected to reach nearly 1 gigawatt of compute capacity by the end of 2026, powered by custom Trainium chips. In parallel, Anthropic’s agreements with CoreWeave and Broadcom secure multiple gigawatts of additional capacity starting in 2027. 

At the same time, the company has a $50 billion commitment in place to build its own data centres in the U.S., signalling a shift towards ownership of compute as well. 

This shows Anthropic is building a diversified compute supply chain—spanning hyperscalers, specialised providers, and its own infrastructure—to ensure it is not constrained by a single bottleneck. 

3. Why is compute, and not just chips, at the centre of this story? 

The constraint of “chip shortage” is only one layer. According to the Organisation for Economic Co-operation and Development, progress in AI is increasingly driven by the ability to scale compute, which spans chips, data centres, networking, and energy systems. 

From an investment perspective, Brookfield Corporation estimates that up to 50% of data centre capital expenditure is now tied directly to compute infrastructure, including GPUs and interconnects. AI has become an infrastructure-heavy business: training requires vast, continuous, and distributed compute, and scaling requires sustained access to that capacity. 

This marks a departure from earlier cloud economics, in which compute was one component among many; it has now become central. 

4. How does this compare with OpenAI’s approach? 

OpenAI has arrived at a similar endpoint through a different route with Microsoft and Azure. With that partnership, the ChatGPT maker ensures priority access to large-scale compute, effectively outsourcing infrastructure while locking into a single ecosystem. More recently, it has expanded into additional partnerships, including agreements with alternative chip and infrastructure providers such as Cerebras, to supplement that capacity. 

Anthropic, by contrast, is distributing its dependencies across multiple providers—Amazon, Google, CoreWeave—while simultaneously investing in its own infrastructure. 

5. Is AI demand outpacing the infrastructure built to support it? 

Evidence suggests it is. Anthropic’s own growth provides a proxy: its base of enterprise customers spending over $1 million annually has doubled to more than 1,000 in a matter of months. 

The OECD’s analysis shows that AI systems rely on a tightly coupled infrastructure stack: chips, data centres, power, and networks. Scaling any one layer without the others creates imbalances. At present, demand for compute, particularly for inference, where models run continuously, is accelerating faster than new capacity can be deployed. 

The bottlenecks are multiple: semiconductors face supply constraints, data centres take years and heavy capital to build, and energy availability limits where new capacity can be sited. 

Brookfield’s finding that compute infrastructure accounts for up to half of data centre capital expenditure captures how capital-intensive this has become. Anthropic’s move toward gigawatt-scale compute capacity further illustrates the magnitude of demand. 

6. What does this mean for the future of AI competition? 

The next phase of AI competition will be determined less by who builds the best model and more by who can afford to run it at scale. Deloitte, in its Tech Trends 2026 report, notes that enterprises are rapidly shifting from pilot projects to production deployments, driving persistent, high-volume compute demand, particularly for inference workloads. This transition turns AI into a continuous operational cost, with some organisations already reporting infrastructure spends running into tens of millions of dollars per month.  

This changes the competitive structure of the industry. Companies with secured access to large-scale compute, through long-term cloud partnerships or capital backing, gain a structural edge, while smaller players struggle to sustain these ongoing costs. This results in an AI ecosystem where infrastructure access determines who can compete at scale.