OpenAI paused Stargate UK this week, citing the usual reasons:
Energy costs. Regulation. Planning delays. Every outlet took the official reason at face value, and the British press spent Thursday explaining why Britain wasn't good enough.
Yet four days earlier, The New Yorker asked whether Sam Altman can be trusted.
Given other recent announcements and decisions at ChatGPT HQ, there might be more going on here than most reporting would have you believe.
I'm Ben Baldieri, and every week I break down the moves shaping GPU compute, AI infrastructure, and the data centres that power it all.
Let's get into it.
The GPU Audio Companion Issue #101
Want the GPU breakdown without the reading? The Audio Companion does it for you, but only if you’re subscribed. If you can’t see it below, click here to fix that.
OpenAI Pauses Stargate UK - The Energy Cost Framing Is Selective
Not a great week for UK AI, on the surface, at least.
OpenAI said this week it is pausing its Stargate UK data centre project, citing "regulation and the cost of energy." Stargate UK was announced in September 2025 alongside Nscale and NVIDIA, with 31,000 GPUs allocated to the UK delivery. The project was set to be built in one of Starmer's flagship AI Growth Zones.
The Bloomberg piece frames the pause as a blow to Britain's AI ambitions.
It is, but it is also the fourth OpenAI pullback in six weeks, and the fourth where the stated reason should be viewed through a market-wide lens.
In March, OpenAI and Oracle scaled back the Abilene Stargate campus from a planned 2GW to 1.2GW. Microsoft then stepped in to lease ~700MW of the freed capacity from Crusoe. The same week, OpenAI shut down Sora. Disney, which had been in discussions about a $1 billion investment, was reportedly blindsided by the decision. The company then raised $122 billion at an $852 billion valuation (Issue #100), with Amazon, not Microsoft, taking the exclusive cloud partner slot.
That indicates the Microsoft-OpenAI rift is now a chasm, and that OpenAI and Oracle are sharing Abilene with a competitor.
Then, on April 6, four days before the UK pause, The New Yorker published Ronan Farrow and Andrew Marantz's profile "Can Sam Altman Be Trusted?"
The piece draws on interviews with more than 100 sources and on previously unreported memos from Ilya Sutskever's 2023 effort to remove Altman as CEO. The allegations are serious: a pattern of misrepresentation, "sociopathic" disregard for consequences, furtive efforts to block AI regulation, ties to autocracies that may have disqualified Altman from a security clearance, and internal plans to sell AI to foreign governments, potentially including Russia and China.
Ouch.
Why this matters:
Viewed through this broader lens, the UK pause seems less a policy failure in Westminster and more a commercial decision in San Francisco.
OpenAI’s commercial commitments are huge, so it’s not entirely surprising we’re seeing some emergency amputations. That being said, it’s important to remember that clickbait headlines and the lampooning of policymakers by the media can be an incredibly powerful distraction from other, more significant matters.
So yes, the UK has real systemic issues. But this framing leverages those impediments to growth as a convenient smokescreen for knee-jerk commercial reorientation by a CEO of increasingly questionable character, facing accelerating market-share loss and intensifying competition. Or something like that.
CoreWeave Lands Both Meta and Anthropic, Locks In Early Vera Rubin
CoreWeave closed two of the biggest AI infrastructure deals of the year within 24 hours of each other.
CoreWeave and Meta announced an expanded agreement on April 9, providing AI cloud capacity through December 2032 for approximately $21 billion, extending the existing relationship and including "some of the initial deployments" of the NVIDIA Vera Rubin platform. Capacity will be distributed across multiple locations.
The next day, CoreWeave and Anthropic announced a multi-year agreement to support the development and deployment of Anthropic's Claude family of models. Compute comes online later this year in a phased rollout with the potential to expand over time. With Anthropic onboard, nine of the top ten AI model providers now run on CoreWeave.
Why this matters:
The $8.5 billion A3-rated term loan CoreWeave closed in March was backed by Meta contracts worth at least $19 billion (Issue #100). That collateral package is now $21 billion and runs through December 2032, longer than the March 2032 debt maturity.
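For scale, the nominal coverage implied by those figures takes one line of arithmetic. This is a simplified sketch: real covenants look at discounted, risk-adjusted contract value, not face value.

```python
# Back-of-envelope collateral coverage on CoreWeave's March term loan,
# using the figures above. Nominal only: ignores discounting, drawdown
# schedule, and contract performance risk.

loan = 8.5e9       # A3-rated term loan closed in March
contracts = 21e9   # Meta contract value, now running through December 2032

coverage = contracts / loan
print(f"Nominal collateral coverage: {coverage:.2f}x")  # roughly 2.47x
```

Roughly 2.5x nominal cover, with the collateral now outlasting the debt by nine months.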
Adding Anthropic as a second anchor tenant on a multi-year basis diversifies the customer concentration that Moody's flagged in the original rating.
Anthropic is picking up a third major compute partner in six months. Amazon is the primary infrastructure partner after the $4 billion investment. Google and Fluidstack handle TPU workloads. CoreWeave is now the NVIDIA GPU leg.
Firmus Raises $505 Million From Coatue, NVIDIA Participating
The APAC neocloud story added a second billion-dollar entry in as many weeks.
Firmus announced a $505 million strategic equity raise led by Coatue, with NVIDIA participating subject to closing conditions, at a $5.5 billion post-money valuation. The investment supports Project Southgate's national rollout across Australia and additional Asia-Pacific deployments based on the NVIDIA Vera Rubin DSX reference design. Total equity raised in the past six months: $1.35 billion. This sits on top of the $10 billion Blackstone-led debt facility closed in February.
Why this matters:
NVIDIA's equity playbook has a fourth confirmed entry. $2 billion into CoreWeave (Issue #86). $2 billion into Nebius (Issue #96). $2 billion into Marvell (Issue #100). Now Firmus.
The qualifying condition is implicit: operators committing to Vera Rubin at gigawatt scale get NVIDIA equity participation.
Two consecutive weeks of billion-dollar APAC neocloud news from two different companies. Last week Sharon AI disclosed Canva plus the $1.25 billion ESDS contract (Issue #100). This week Firmus raises at $5.5 billion. Different customer segments, same country, same structural thesis. The APAC sovereign compute market is now a real ecosystem.
Anthropic Ships Mythos via Project Glasswing With 12 Launch Partners
The model we covered as an accidental leak last week just became the centrepiece of a coordinated Western cyber-defence initiative (or, depending on your read, an incredible PR campaign after a crown jewel was lost).
Anthropic announced Project Glasswing, bringing together AWS, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks to use Claude Mythos Preview for defensive cybersecurity work. Anthropic is committing $100 million in usage credits plus $4 million in direct donations to open-source security organisations.
The model will be priced at $25 per million input tokens and $125 per million output tokens after the research preview - five times the price of Opus 4.6.
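At those rates, the implied Opus 4.6 prices are $5 and $25 per million tokens. A rough sketch of what that gap means on a large defensive job - the workload sizes below are hypothetical, purely for scale:

```python
# Illustrative cost comparison at the stated Mythos pricing
# ($25/M input, $125/M output, i.e. 5x Opus 4.6).

MYTHOS = {"input": 25.0, "output": 125.0}     # $ per million tokens
OPUS = {k: v / 5 for k, v in MYTHOS.items()}  # implied Opus 4.6 rates

def run_cost(rates, input_tokens, output_tokens):
    """Cost in dollars for one job at the given per-million-token rates."""
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000

# Hypothetical codebase audit: 40M tokens scanned, 2M tokens of findings.
print(f"Mythos: ${run_cost(MYTHOS, 40_000_000, 2_000_000):,.0f}")  # $1,250
print(f"Opus:   ${run_cost(OPUS, 40_000_000, 2_000_000):,.0f}")    # $250
```

Security-grade capability at a security-budget premium, in other words.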
Why this matters:
The Mythos leak thread closes on Anthropic's terms. Fortune revealed "Mythos" in a public data cache ten days ago (Issue #100). Anthropic turned the forced disclosure into a 12-partner consortium anchored by Apple, JPMorgan, and three direct model competitors - a positioning win that reframes last week's embarrassing leak as this week's safety leadership announcement.
Mythos Preview has already identified thousands of zero-day vulnerabilities. Anthropic's Frontier Red Team disclosed three patched examples: a 27-year-old vulnerability in OpenBSD that allowed remote crashes, a 16-year-old vulnerability in FFmpeg that survived more than five million automated test runs, and an autonomous privilege escalation chain in the Linux kernel.
SambaNova and Intel Unveil Three-Way Disaggregated Inference Blueprint
The heterogeneous silicon thesis just received another validation.
SambaNova and Intel announced a joint hardware solution that assigns each phase of agentic AI inference to purpose-built silicon: GPUs for prefill, SambaNova RDUs for decode, and Intel Xeon 6 processors for agentic tool execution and orchestration. The design ships in H2 2026 to enterprises, cloud providers, and sovereign AI programmes, and critically, it deploys in standard air-cooled data centres rather than the purpose-built liquid-cooled facilities NVIDIA's GB300 NVL72 racks require.
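In scheduler terms, the three-way split above looks something like the sketch below. The pool names and routing logic are hypothetical, purely to illustrate the phase-to-silicon assignment; a real system also has to move KV-cache state from the prefill pool to the decode pool.

```python
# Minimal sketch of three-way disaggregated inference routing: prefill on
# GPUs, decode on RDUs, agent tool execution on CPUs. All names are
# illustrative, not vendor APIs.

from dataclasses import dataclass, field

@dataclass
class DevicePool:
    name: str
    queue: list = field(default_factory=list)

    def submit(self, task: str) -> str:
        self.queue.append(task)
        return f"{task} -> {self.name}"

POOLS = {
    "prefill": DevicePool("gpu-pool"),   # compute-bound: ingest the full prompt
    "decode": DevicePool("rdu-pool"),    # bandwidth-bound: token-by-token generation
    "tools": DevicePool("xeon-pool"),    # branchy, latency-sensitive agent tool calls
}

def route(phase: str, task: str) -> str:
    """Send each inference phase to the silicon it is best suited for."""
    return POOLS[phase].submit(task)

print(route("prefill", "req-1/prompt"))
print(route("decode", "req-1/generate"))
print(route("tools", "req-1/tool-call"))
```

The design bet is that each pool scales independently, so you buy decode bandwidth without overpaying for prefill compute.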
Why this matters:
This is the third disaggregated inference architecture to ship in six weeks.
AWS and Cerebras announced Trainium-for-prefill plus CS-3-for-decode in March (Issue #98). NVIDIA unveiled the Rubin LPX at GTC 2026 (Issue #97). SambaNova and Intel now ship three-way disaggregation with dedicated silicon for the agent tool layer. The homogeneous GPU cluster is a legacy pattern for production inference.
Air-cooled deployment is the competitive moat. GB300 NVL72 needs liquid cooling, 130kW+ racks, and purpose-built facilities. Most data centres worldwide can't host it. Rubin LPX systems are even more challenging. SambaNova and Intel deploy in the enterprise facility you already own. For sovereign AI programmes that need to keep data in-country but can't build new facilities, this is the only production-ready inference architecture that fits the constraint.
Meta Ships Muse Spark, First Frontier Model From Meta Superintelligence Labs
The capex finally has a flagship model to justify it.
Meta released Muse Spark, the first model from Meta Superintelligence Labs and the first in what Meta calls "the Muse family." The release is described as "the first product of a ground-up overhaul of our AI efforts." Muse Spark is natively multimodal with tool-use, visual chain-of-thought, and multi-agent orchestration. In Contemplating mode, it scores 58% on Humanity's Last Exam. Meta claims the new pretraining stack delivers the same capabilities as Llama 4 Maverick with more than an order of magnitude less compute.
Why this matters:
Llama 4 Maverick was Meta's last major model release, and it underwhelmed. The Llama brand is dead. Muse is the new line.
Meta has spent 18 months buying every form of AI compute on the market: NVIDIA Blackwell and Rubin, AMD Instinct at 6GW (Issue #93), Broadcom co-designed MTIA, in-house MTIA 300 through 500 (Issue #96), Arm AGI CPU partnership (Issue #99), $27 billion Nebius deal (Issue #98), and $21 billion CoreWeave extension this week. Those investments might finally be paying off.
Muse Spark shipped directly into meta.ai and the Meta AI app on launch day. API access is private preview only. That's the opposite of OpenAI's consumer-plus-enterprise playbook and the opposite of Anthropic's enterprise-plus-API playbook. Meta has a billion-user consumer distribution channel no other frontier lab can match, and it's using frontier capability as the weapon to hold it.
Iran Threatens Stargate UAE, Hits AWS and Oracle in Bahrain and Dubai
The first extended conflict in the AI-infrastructure era is hitting data centres directly.
On April 7, Iran's Khatam al-Anbiya Headquarters released a video naming the 10-square-mile UAE-US AI Campus, currently under construction, as a legitimate target. Up to 1GW of the site is allocated to OpenAI under Stargate, with the wider campus potentially scaling to 5GW. The video showed images of US executives including Sam Altman and Jensen Huang. Days earlier, Iranian strikes hit AWS data centres in Bahrain and Dubai, with Amazon declaring "hard down" status on multiple zones. Oracle took shrapnel damage at a Dubai facility. The Strait of Hormuz disruption is affecting aluminium, helium, and LNG flows - all semiconductor supply chain inputs.
Why this matters:
Physical security for hyperscale AI infrastructure just moved from a compliance checkbox to a strategic planning constraint. AWS declared "hard down" on production zones with no timeline for restoration. A state actor released a propaganda video with pictures of named US tech executives. Every site selection decision at every operator globally just acquired a new variable.
Of OpenAI's four flagship Stargate locations, one is operational but partially Microsoft's, one is paused, one has gone silent, and one just got named as a legitimate military target. The Stargate thesis as announced in January 2025 does not survive 2026's geopolitics.
The supply chain consequences will outlast the conflict. Aluminium, helium, and LNG are first-order inputs to chip fabrication. Every gigawatt-scale AI factory under construction globally depends on a supply chain that routes through this region. The war is not being priced into anyone's 2027 capacity forecasts. It should be.
The Rundown
OpenAI is the first frontier lab to hit a wall that isn't compute.
They have $122 billion. They have NVIDIA on speed dial. They have the product that made "AI" a household word. And this week, they walked away from Britain, got a New Yorker cover story about whether the CEO can be trusted, and watched Anthropic sign the cloud provider they don't (yet) use.
Everyone spent three years assuming the biggest bottleneck was GPUs, then cash, then power.
Turns out it's none of those.
It's whatever OpenAI is dealing with internally that nobody on the outside has a clean name for yet. Execution. Trust. Politics. Partner relationships. Pick your flavour.
The frontier lab with the biggest war chest in history looks to be busy doing emergency amputations.
And the rest of the market is realising OpenAI is no longer the centre of gravity it once was.
See you next week.

