IT LIVES.

Phase one of the Stargate campus has gone from a patch of dirt to a live site in Texas in just 15 months. In making this happen, Crusoe, Oracle, and OpenAI have achieved something as yet unheard of in this industry:

Actually delivering on a multi-hundred-billion-dollar press release.

I’m Ben Baldieri. Every week, I break down what’s moving in GPU compute, AI infrastructure, and the data centres that power it all.

Here’s what’s inside this week:

  • Crusoe brings phase one of Stargate online

  • Nscale raises another $433M

  • CoreWeave locks in a $14B Meta deal

  • Anthropic launches Claude Sonnet 4.5

  • Cerebras closes a $1.1B Series G

  • Microsoft’s CTO confirms plans to wean off NVIDIA

  • Innova submits plans for 4.4GW of UK data centres

Let’s get into it.

The GPU Audio Companion Issue #67

Want the GPU breakdown without the reading? The Audio Companion does it for you, but only if you’re subscribed. If you can’t see it below, click here to fix that.

Crusoe Brings Phase One of Stargate Online

Crusoe has switched on the first two buildings of its Abilene campus. But the headline isn’t the size. It’s the speed.

Ground broke in June 2024. Stargate was announced in January 2025. By June 2025, GPUs were being delivered. By September 2025, training and inference jobs are live.

This isn’t just another data centre ribbon-cutting. It’s the first physical proof point of the $500B Stargate commitment. And it lands just after OpenAI, Oracle, and SoftBank expanded the program with five new US sites, pushing planned capacity close to 7GW.

Why this matters:

Nscale Raises Another $433M

Nscale has closed a $433 million Pre-Series C SAFE, just days after a record-breaking $1.1 billion Series B.

A SAFE isn’t priced equity. It’s a bridge. Investors wire money now in exchange for future equity at the next round. No debt. No immediate dilution.

Just instant capital to keep building.

That’s why the speed matters.

Blue Owl, Dell, NVIDIA, and Nokia all invested in the Series B. Now they’ve doubled down. Immediately. Taking more exposure before the C round sets the valuation.

If that’s not conviction, I don’t know what is.

Why this matters:

  • $1.5B of funding across two rounds, announced back-to-back and with follow-on participation from the same investors, cements Nscale’s position as Europe’s emerging AI infrastructure leader.

  • Execution risk is always a factor (especially considering how little Nscale has online relative to the announcements), but even at this early stage, all signs point in one direction: a future public offering.

CoreWeave Locks in $14B Meta Deal

CoreWeave has signed a $14.2 billion agreement to supply Meta with access to NVIDIA’s latest GB300 systems.

The deal diversifies CoreWeave’s customer base beyond Microsoft, which made up 71% of revenue as recently as June, and builds on the multibillion-dollar commitment signed with OpenAI last week. For Meta, the agreement is part of an AI spending spree that could push 2025 capex as high as $72 billion.

Why this matters:

  • At $14B, this is one of the clearest signals yet that hyperscale AI spend is shifting from pilots to long-term infrastructure commitments.

  • Meta is now a cornerstone CoreWeave client, reducing dependency on Microsoft and boosting investor confidence.

  • With both OpenAI and Meta locked in, CoreWeave is cementing its position as the neocloud most closely aligned with frontier model builders.

Anthropic Launches Claude Sonnet 4.5

Anthropic has launched Claude Sonnet 4.5, positioning it as the strongest coding model on the market and a step-change in agentic AI.

The model leads on SWE-bench Verified, outpaces rivals on OSWorld’s real-world computer tasks, and posts major gains in math, reasoning, and domain knowledge across finance, law, medicine, and STEM.

Alongside the model, Anthropic rolled out new developer tools. Claude Code now has checkpoints, a VS Code extension, and deeper context editing. The Claude API adds long-memory support, while Claude apps bring code execution and file creation directly into the chat. The new Claude Agent SDK opens up the same infrastructure Anthropic uses internally, giving developers the foundation to build long-running autonomous agents.

Why this matters:

Cerebras Closes $1.1B Series G

Cerebras just locked in one of the year’s largest rounds: $1.1 billion at an $8.1 billion valuation, led by Fidelity and Atreides, with Tiger Global, Valor Equity Partners, and 1789 Capital piling in alongside existing backers like Altimeter and Alpha Wave.

The company is positioning itself as the fastest inference provider in the world. With benchmarking firm Artificial Analysis confirming consistent 20× speed gains over NVIDIA GPUs, there might be some truth to that claim. That performance, plus customer traction ranging from AWS and Meta to the US Department of Energy, has created a demand surge. Cerebras says it now serves trillions of tokens per month across its own cloud, on-prem installs, and partner platforms.

The cash will fund more wafer-scale innovation, expanded US manufacturing, and larger data centre footprints, and will likely support preparations for the IPO the company filed for in August 2024.

Why this matters:

Microsoft CTO Confirms Plan to Wean Off NVIDIA

Microsoft’s CTO just said the quiet part out loud.

Kevin Scott told Italian Tech Week the company’s long-term plan is to move off NVIDIA and AMD, and lean into in-house silicon like its Maia AI accelerators and Cobalt CPUs. For now, NVIDIA still offers the best price-performance, and Microsoft will “literally entertain anything” to meet today’s compute crunch.

Still, the signal is unmistakable:

More Microsoft inside Microsoft, and less revenue for NVIDIA.

Why this matters:

  • We’ve previously discussed how much of NVIDIA’s revenue depends on a handful of hyperscalers, with Microsoft being among the largest buyers.

  • When Microsoft and its peers fill racks with their own chips, NVIDIA’s future volumes and pricing power will be directly at risk.

  • That’s likely why NVIDIA has been busy locking in massive contracts with, or taking equity stakes in, some of its neocloud progeny.

  • Owning and supporting distribution channels outside the hyperscalers builds a moat, guarantees silicon offtake, and pressures those muscling in on Team Green’s turf by indirectly counter-muscling in on theirs.

  • This isn’t really all that surprising, to be clear, but it is interesting to finally have such clear and public confirmation.

Innova Submits Plans for 4.4GW of UK Data Centres

UK renewable energy firm Innova has submitted proposals for ten data centre projects totalling 4.4GW of capacity.

Five of the sites will be hybrid builds co-located with battery storage, while the remaining five will be standalone. The sites span Manchester, Buckinghamshire, Worcestershire, Durham, Hampshire, Derbyshire, Norwich, Swansea, and South Yorkshire, with connection dates pencilled in between 2029 and 2032.

This is Innova’s first move into data centres. Founded in 2014, the privately held company has focused on solar and battery storage, with a development pipeline exceeding 26GW across the UK. It has grown into one of the country’s largest independent renewables developers, active in both utility-scale solar farms and behind-the-meter BESS deployments.

Why this matters:

  • 4.4GW would put Innova among the UK’s largest proposed AI-ready data centre developers.

  • Co-locating with batteries strengthens the grid case, making approval more likely under the UK’s recent grid-connection reforms.

  • The pivot reflects a wider trend: renewable-first players entering AI infrastructure directly, rather than just supplying power to hyperscalers and neoclouds.

The Rundown

The AI infrastructure story to date has been one of giant rounds, mega-contracts, sovereign ambitions, and press releases.

It’s also been incredibly noisy. So noisy, in fact, that it gave me the impetus to start this newsletter.

Because I was tired of wading through announcements of announcements of announcements.

But this week showed something else.

Crusoe went from dirt to silicon in Abilene in record time, transforming Stargate from an OpenAI fundraising tool into something real. Nscale proved that UK startups can play at the same level as their US contemporaries. CoreWeave showed that neoclouds can land multi-billion dollar hyperscale tenants back-to-back. Anthropic and Cerebras reminded everyone that the software and silicon races are still wide open. Renewables developers in the UK are angling for a slice of the action. And finally, Microsoft said out loud what NVIDIA already knows: hyperscalers want less dependency and more control.

The thread that ties it all together?

Execution.

That’s the reason the same names keep cropping up over and over again, while others have fallen by the wayside. They’re actually delivering on what they said they would. And while there’s still liberal use of press releases, there’s a fair bit more substance to what’s being released now versus six months ago.

The starting sprint of the AI infrastructure race is over.

The endurance phase has begun.

Who has the capacity to see things through to the finish line?

We’ll all find out soon enough.

See you next week.
