Artificial Intelligence and DaaS · May 12, 2026 · 6 min read

Anthropic, SpaceXAI, and the New Compute Race in AI

SpaceXAI has reached an agreement to provide Anthropic with large-scale AI compute capacity from its Colossus 1 facility. The deal grants Anthropic access to more than 300 megawatts of capacity and roughly 220,000 NVIDIA GPUs for training and inference workloads. The agreement marks one of the most significant infrastructure moves in the frontier AI market because it gives Anthropic a dedicated block of capacity at a time when Claude, Claude Code, and Claude-based enterprise deployments are scaling rapidly. It also shows how SpaceXAI is beginning to turn Colossus into an external compute platform for major AI companies rather than reserve the facility for internal xAI model development. The deal illustrates how compute has become a primary constraint on frontier AI competition, alongside model architecture, product experience, enterprise distribution, and developer adoption.

IDC’s point of view

Compute is becoming a core competitive asset for frontier AI labs

The Anthropic-SpaceXAI agreement shows how compute has moved from a background operational concern to a frontline competitive variable among leading AI labs. These companies now compete on their ability to secure large blocks of training and inference capacity in addition to model architecture, product experience, and enterprise distribution. Anthropic’s most recent capital raise set a post-money valuation of roughly $380 billion and disclosed an annual revenue run rate of about $14 billion as of early 2026. Subsequent reporting points to expectations that annual recurring revenue could reach roughly $44 billion, which supports a path toward a trillion-dollar valuation over the medium term. At that scale, Anthropic needs additional compute to keep model performance, product quality, and enterprise adoption on pace with demand.

Anthropic also operates with a smaller base of committed training and serving capacity than OpenAI, whose recent financing and infrastructure partnerships give it access to substantial dedicated training and inference capacity. Anthropic’s relationships with Amazon and Google remain central to its infrastructure strategy, but those relationships have not fully closed the gap in committed compute. The SpaceXAI agreement narrows that gap on terms that are difficult to secure through conventional cloud channels. Training remains essential, but inference now carries more of the operational burden as Claude, Claude Code, and agentic workflows become part of daily enterprise and developer work. The relevant demand unit is now the completed workflow: each step in planning, retrieval, tool selection, API calls, code execution, validation, and summarization issues one or more model calls. That structure raises inference volume, tightens latency expectations, and heightens reliability requirements, which the expanded capacity at Colossus 1 is intended to address.
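To make the workflow-as-demand-unit point concrete, the sketch below tallies model calls across the steps named above. The step names follow the paragraph; the per-step call counts and workflow volume are illustrative assumptions, not Anthropic's actual figures.

```python
# Hypothetical illustration: a single agentic workflow fans out into many
# model calls, so inference demand scales with completed workflows rather
# than individual prompts. Call counts per step are assumptions.

WORKFLOW_STEPS = {
    "planning": 1,
    "retrieval": 2,        # e.g., query rewrite plus result re-ranking
    "tool_selection": 1,
    "api_calls": 3,        # one call to interpret each tool result
    "code_execution": 2,   # generate code, then review its output
    "validation": 1,
    "summarization": 1,
}

def calls_per_workflow(steps: dict) -> int:
    """Total model calls issued by one completed workflow."""
    return sum(steps.values())

def daily_inference_calls(workflows_per_day: int, steps: dict) -> int:
    """Rough daily inference volume for a fleet of agents."""
    return workflows_per_day * calls_per_workflow(steps)

print(calls_per_workflow(WORKFLOW_STEPS))                 # 11
print(daily_inference_calls(1_000_000, WORKFLOW_STEPS))   # 11000000
```

Under these assumptions, a million completed workflows a day translates into eleven million model calls, which is why inference capacity, not just training capacity, drives agreements of this size.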

Power, land, and facilities are moving upstream in AI competition

Competition among frontier AI companies increasingly depends on their ability to secure and operate the infrastructure required for large-scale AI training and inference. Land, power availability, real estate, zoning, permitting, cooling, networking, and data center operations now sit upstream of model development. Colossus 1 is a large AI data center in Memphis, Tennessee, originally associated with xAI and now available to Anthropic through SpaceXAI. Its importance lies less in the name of the facility than in the infrastructure base behind it: more than 300 megawatts of compute capacity, roughly 220,000 NVIDIA GPUs, and a site already organized around the power, cooling, networking, and operations required for AI-scale compute. The companies that can assemble and operate such environments gain a practical advantage because large-scale model development depends on infrastructure that takes years to secure, permit, build, and stabilize.
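The quoted figures imply a plausible per-GPU power budget, which is a quick sanity check on facility-scale claims. The calculation below assumes the 300 MW is total facility capacity attributed entirely to the GPU fleet, so it folds cooling, networking, and host overhead into the per-GPU number.

```python
# Back-of-envelope check on the quoted facility figures. Assumption:
# the full 300 MW is attributed to the GPU fleet, so this all-in figure
# includes cooling, networking, and host overhead, not just accelerator TDP.

facility_mw = 300          # "more than 300 megawatts" of capacity
gpu_count = 220_000        # "roughly 220,000 NVIDIA GPUs"

watts_per_gpu = facility_mw * 1_000_000 / gpu_count
print(f"{watts_per_gpu:.0f} W per GPU, all-in")   # 1364 W per GPU, all-in
```

An all-in budget of roughly 1.4 kW per GPU is consistent with high-density AI deployments, where facility overhead adds meaningfully to the accelerator's own draw, so the two headline numbers hang together.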

NVIDIA’s grip on frontier compute remains intact

The Anthropic-SpaceXAI agreement reinforces NVIDIA’s dominance in frontier-scale AI compute. The agreement lands amid a broader debate about whether custom silicon from Amazon and Google can reduce frontier labs’ dependence on NVIDIA. Anthropic is a useful test case for that debate because it relies heavily on Amazon and Google for infrastructure — both of which have invested heavily in alternatives to NVIDIA through AWS Trainium and Inferentia, and through Google TPUs. Yet Anthropic’s largest new capacity agreement still centers on NVIDIA GPUs. That weakens the case that custom silicon is close to displacing NVIDIA for the largest frontier-scale capacity expansions. The agreement also shows that the broader market for NVIDIA capacity still does not fully meet the needs of the largest frontier labs, as Anthropic chose SpaceXAI over neocloud providers such as CoreWeave, Lambda, and Nebius for a larger, more concentrated block of power, facilities, networking, operations, and NVIDIA GPUs.

SpaceXAI adds an AI infrastructure premium to the SpaceX valuation case

Colossus adds a new value layer to SpaceX by turning scarce AI infrastructure into an asset that serves more than internal model development. SpaceX already has large value drivers in space transportation and Starlink broadband, but Colossus gives the combined company exposure to one of the most constrained parts of the AI market: dense compute capacity backed by power, facilities, NVIDIA GPUs, and operational depth. Elon Musk noted that xAI would become part of SpaceX and that the combined entity would be called SpaceXAI. The Anthropic agreement shows that SpaceXAI can sell large-scale compute capacity to a major AI lab with demanding training and inference requirements, and that the strategic asset is not just the hardware but the operating discipline and deal-making that sit on top of it. That scarcity can translate into pricing power, strategic deal flow, and a higher valuation multiple than SpaceX would command from space transportation and Starlink broadband alone.

What this means for the market

The Anthropic agreement signals that SpaceXAI is becoming a meaningful infrastructure force in frontier AI, not just a rocket company with a side bet on AI. For Anthropic, the deal closes a meaningful compute gap and removes a ceiling on how fast Claude and its associated products can scale. For the broader market, it reinforces two durable realities: NVIDIA’s grip on frontier-scale compute remains intact, and the companies that control power, land, and facilities at scale are sitting on an increasingly scarce and valuable asset. How quickly Anthropic can translate this capacity into measurable reliability and performance gains will be one of the more important stories to watch in enterprise AI over the next 12 to 18 months.

To go deeper on what this deal means for enterprise AI strategy, read IDC’s full research on the Anthropic-SpaceXAI agreement.

Also publishing: IDC’s analysis of the Cursor-SpaceXAI deal and what it means for the agentic coding market.

Arnal Dayaratna - Research Vice President, Software Development

Dr. Arnal Dayaratna is Research Vice President, Software Development at IDC. Arnal focuses on software developer demographics, trends in programming languages and other application development tools, and the intersection of these development environments and the many emerging technologies that are enabling…
