Anthropic’s SpaceX Deal Makes Compute a Product Feature
Anthropic’s SpaceX compute deal shows why AI product experience now depends on capacity as much as model quality.
Anthropic’s latest Claude limit increase is really an infrastructure story. The company says it has signed a partnership with SpaceX to use all compute capacity at SpaceX’s Colossus 1 data center, adding more than 300 megawatts of power and over 220,000 NVIDIA GPUs, with the capacity coming online within the month.
That capacity is immediately showing up in the product. Claude Code’s five-hour limits are doubling for Pro, Max, Team, and seat-based Enterprise plans. Peak-hour reductions are being removed for Pro and Max users, and Opus API rate limits are rising.
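To make the mechanics concrete, a five-hour usage limit of this kind is typically modeled as a rolling window: usage events age out after the window elapses, and "doubling the limit" just raises the cap. The sketch below is a hypothetical illustration only; the class name, cap units, and window accounting are assumptions, not Anthropic's actual implementation.

```python
from collections import deque


class RollingWindowLimiter:
    """Hypothetical sketch of a five-hour rolling usage window.

    The cap, window length, and accounting unit are illustrative
    assumptions; nothing here reflects Anthropic's real system.
    """

    def __init__(self, cap_units: int, window_seconds: int = 5 * 3600):
        self.cap_units = cap_units
        self.window_seconds = window_seconds
        self.events: deque[tuple[float, int]] = deque()  # (timestamp, units)

    def _evict(self, now: float) -> None:
        # Drop events that have aged out of the rolling window.
        while self.events and now - self.events[0][0] >= self.window_seconds:
            self.events.popleft()

    def try_consume(self, now: float, units: int) -> bool:
        """Record usage if it fits under the cap; return True on success."""
        self._evict(now)
        used = sum(u for _, u in self.events)
        if used + units > self.cap_units:
            return False
        self.events.append((now, units))
        return True


# Doubling the limit, as in the announced change, is just a larger cap:
old_plan = RollingWindowLimiter(cap_units=100)
new_plan = RollingWindowLimiter(cap_units=200)
```

The design choice worth noting for PMs: with a rolling window, extra capacity translates directly into a single tunable parameter, which is why a capacity deal can show up in the product almost immediately.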
For PMs, the lesson is that AI product experience is now constrained by capacity as much as capability. Usage limits, latency, and reliability decide whether users trust an AI workflow enough to make it habitual.
The bigger signal: frontier AI companies are assembling compute from hyperscalers, GPU clusters, and now SpaceX. In AI, infrastructure is becoming roadmap. The products that win may be the ones that can keep intelligence reliably available at the moments users need it most.
Source: Anthropic