- The cluster supports Anthropic and its Claude models for both training and inference, and is built on AWS‑designed Trainium2 silicon.
- With roughly 500,000 chips now active, AWS says Rainier is 70% larger than any previous internal platform and will cross one million chips by the end of 2025.
What happened: AWS activates Project Rainier AI supercluster
AWS has officially activated Project Rainier, a major AI infrastructure installation now populated with nearly 500,000 of its proprietary Trainium2 chips. The cluster, first revealed at AWS's re:Invent event in late 2024, is spread across multiple data‑centre sites and uses a specialised architecture: "UltraServers" with 64 Trainium2 chips each, interconnected via high‑speed links.
As part of the deployment, AWS said its partner Anthropic will run its Claude AI models on the Rainier cluster, with the ambition to scale usage to more than one million Trainium2 chips by the end of 2025. Some sites already in operation include a large Indiana campus with multiple buildings and a potential 2.2 GW power draw.
Why it’s important
The rollout of Project Rainier marks a significant step in the shift from general‑purpose GPU platforms towards custom‑designed AI silicon at hyperscale. By building its own hardware for both training and inference, AWS gains tighter control over the stack, from chip to cloud, letting it optimise cost, performance and energy efficiency.
For Anthropic and similar AI firms, access to this volume of custom compute opens the door to training larger, more capable models more rapidly. The fact that AWS claims the cluster is already 70% larger than any previous internal offering underscores how fast the compute arms race is moving.
From a competitive standpoint, AWS's large‑scale deployment puts pressure on other cloud providers and chip makers (including those relying on GPUs) to match its performance and infrastructure scale. Going from announcement to launch in under a year hints at a new benchmark in AI infrastructure delivery.
In summary, Project Rainier doesn’t just represent more compute—it signals a new era in cloud AI infrastructure, one where vertical integration of silicon, servers and data‑centres becomes a strategic differentiator.