Anthropic has locked in one of the largest computing deals in AI history. The agreement with Google and Broadcom will deliver 3.5 gigawatts of compute capacity starting in 2027. That's enough electricity to power millions of homes, and it's all going toward making Claude more powerful.

The Deal Behind the Numbers
Anthropic announced a new agreement with Google and Broadcom for multiple gigawatts of next-generation TPU capacity. TPUs are Google's custom chips built specifically for AI workloads. They power everything from Claude's code generation to document analysis.
This isn't starting from zero. One gigawatt is already coming online in 2026 under a previous deal, and the new 3.5 gigawatts stacks on top of that. Broadcom has committed to designing and supplying future generations of Google's TPUs through 2031, making this a multi-year infrastructure play.
Anthropic's CFO Krishna Rao called it the company's most significant compute commitment to date. The reason is simple: demand is outpacing supply.
To put the scale in context, global Bitcoin mining draws roughly 13 to 25 gigawatts of continuous power. Anthropic alone is now securing a meaningful fraction of that from a single partnership.
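As a back-of-envelope check (using the 13 to 25 gigawatt Bitcoin estimate quoted above, which is itself a rough range), that "meaningful fraction" works out to a double-digit share:

```python
# Back-of-envelope: Anthropic's new TPU capacity vs. global Bitcoin mining draw
new_capacity_gw = 3.5
btc_low_gw, btc_high_gw = 13, 25

share_high = new_capacity_gw / btc_low_gw   # against the low-end mining estimate
share_low = new_capacity_gw / btc_high_gw   # against the high-end mining estimate
print(f"{share_low:.0%} to {share_high:.0%} of Bitcoin's estimated power draw")
```

In other words, this single partnership secures somewhere between roughly a seventh and a quarter of what the entire Bitcoin network consumes.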

Why a Chip Deal Matters for Your 9-to-5
More compute means more capable AI models, faster responses, and broader availability. That quarterly report Claude takes 30 seconds to analyze today? It could take 10 seconds next year.
The growth numbers tell the story. Anthropic's run-rate revenue has surpassed $30 billion, up from roughly $9 billion at the end of 2025. That's more than a threefold increase in about four months.
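Taking those figures at face value (they are the article's numbers, not audited results), the implied pace is striking:

```python
# Implied growth from ~$9B to $30B+ run-rate over roughly four months
start_b, end_b, months = 9, 30, 4

multiple = end_b / start_b              # overall growth multiple
monthly = multiple ** (1 / months) - 1  # implied compound monthly growth rate
print(f"{multiple:.1f}x overall, ~{monthly:.0%} compounded per month")
```

A sustained compound growth rate anywhere near that level is what turns a chip supply agreement into a necessity rather than a bet.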
Enterprise adoption is accelerating even faster. The number of clients spending over $1 million a year on Claude has crossed 1,000. Two months ago it was 500. These aren't hobbyists. They're legal teams, financial analysts, engineering departments, and operations groups running Claude through AWS Bedrock, Google Cloud Vertex AI, and Microsoft Azure Foundry.
Your company is either already using these tools or competing against someone who is. The infrastructure being locked in today directly determines what shows up in your workflow 12 months from now.

The Bigger Picture: AI Infrastructure Is the New Arms Race
Broadcom's stock rose 3% in after-hours trading on the announcement. Investors see what's happening. Broadcom is now the chip implementation layer for two of the three largest AI companies: Anthropic and OpenAI. It separately secured a $10 billion custom silicon program with OpenAI last October.
Meanwhile, Anthropic is navigating one of the most unusual corporate disputes in recent memory. A federal judge blocked the Pentagon's attempt to label Anthropic a supply chain risk, a designation normally reserved for foreign adversaries. The dispute started when Anthropic refused to let the military use Claude without restrictions on autonomous weapons and mass surveillance.
The judge's ruling was blunt. She wrote that nothing in the governing statute supports what she called an "Orwellian notion" that an American company can be branded a potential adversary for disagreeing with the government.
Despite the legal battle, Anthropic's commercial performance has only accelerated. Revenue tripled. Enterprise clients doubled. And now 3.5 gigawatts of new compute is on the way.
For professionals who rely on Claude through any major cloud platform, the takeaway is straightforward. The service isn't going anywhere. And the next generation of AI tools will be built on the computing power being locked in right now.
If your team is still in the "experimenting with AI" phase, this is the signal that the window for catching up is narrowing fast.
