The market’s still growing, but the hedges are getting thicker.

Backstops. Sale-leasebacks. Equity loops. Market-share collapse. Layoffs.

All signs of an industry that’s increasingly wary of its own momentum.

I’m Ben Baldieri, and every week I break down the moves shaping GPU compute, AI infrastructure, and the data centres that power it all.

Here’s what’s inside this week:

Let’s get into it.

The GPU Audio Companion Issue #70

Want the GPU breakdown without the reading? The Audio Companion does it for you, but only if you’re subscribed. If you can’t see it below, click here to fix that.

NVIDIA and OpenAI’s $1 Trillion Circle Tightens

NVIDIA is in talks to backstop OpenAI’s financing and lease obligations.

The story, as reported by the Wall Street Journal, is the latest in Jensen’s circular saga. If the agreement becomes a reality, Team Green will allegedly guarantee loans and chip purchases that underpin OpenAI’s litany of agreements. Per a recent Bloomberg article, there are a lot of them:

  • NVIDIA ↔ OpenAI: NVIDIA invests up to $100B in OpenAI, which then commits to filling new data centres with millions of NVIDIA GPUs.

  • OpenAI ↔ Oracle: A $300B cloud deal, where Oracle spends billions on NVIDIA chips to host OpenAI’s workloads, sending revenue straight back to NVIDIA.

  • OpenAI ↔ CoreWeave: OpenAI signs contracts worth up to $22.4B for compute.

  • OpenAI ↔ AMD: Tens of billions committed to AMD GPUs, with OpenAI receiving an equity option in AMD itself.

Throw in the xAI SPV, the CoreWeave and the Lambda backstop deals, and you end up with a very tight and potentially tangled funding circle. Each one keeps capital circulating, inflates valuations if you’re cynical, and ensures no single player can fall without pulling the others down.

Why this matters:

Anthropic and Google Million TPU Cloud Deal

Anthropic has signed a multi-tens-of-billions cloud deal with Google.

Per Bloomberg, the deal gives Anthropic expanded access to Google’s TPU clusters to power future Claude models, continuing the tight partnership between the two companies. Google has already invested $3B into Anthropic since 2023 and remains both a shareholder and primary cloud provider. The proposed contract would mark the latest escalation in Anthropic’s capital race, coming just after its $13B funding round led by Iconiq, Fidelity, and Lightspeed that tripled its valuation to $183B.

Why this matters:

  • NVIDIA is currently the only game in town, and all the major players are looking to reduce reliance on a would-be competitor’s product and mitigate the risks of single points of failure.

  • Through this deal, Anthropic reduces reliance on a single compute provider while gaining the benefits of a heterogeneous silicon portfolio, and Google lands a marquee customer for up to 1 million TPUs. Win-win.

Crusoe Takes the Cloud to Space, Raises $1.375B

Compute is headed into orbit. Literally and financially.

Crusoe just announced the first public cloud in space, in partnership with Starcloud, a company building orbital data centres powered entirely by solar energy. Under the deal, Crusoe Cloud will run a dedicated module aboard a Starcloud satellite launching in late 2026, with limited GPU capacity available from orbit by early 2027. The system will use direct solar power to run compute workloads off-Earth, bypassing terrestrial constraints on energy supply, cooling, and land.

Cully Cavness, Crusoe’s co-founder and COO, framed it as a natural extension of the company’s “energy-first” strategy:

If Crusoe’s on-Earth model is co-locating data centres with underutilised power sources, space is the ultimate version of that idea - infinite solar input, zero grid dependency.

Starcloud’s orbital data centre is a self-contained satellite platform designed to host high-performance compute modules without any connection to ground-based infrastructure. The company will also make history next month by launching the first NVIDIA H100 GPU into orbit, delivering 100x more AI compute than any previous deployment in space.

A big week all around.

Why this matters:

  • Power is the only bottleneck that matters in the AI infrastructure buildout.

  • Space-based compute makes sense because it sidesteps terrestrial power constraints, offering an unlimited supply of clean energy for AI workloads.

  • Armed with fresh capital, Crusoe head into Q4 with the ample dry powder needed to make all these plans a reality.

AI Browsers Are the New Gold Rush

Every major AI company now wants its own browser.

In the span of a week, OpenAI launched Atlas and Microsoft rebranded Edge with Copilot Mode. Each promises the same thing: a web experience that thinks for you. Summarising, comparing, buying, booking, and automating in real time. Perplexity’s Comet started the trend. Its viral success, and failed bid to acquire Chrome, showed investors what “AI-native browsing” could look like.

The majors have now followed suit, each pitching a slightly shinier version of the same product: an embedded AI assistant that sees what you see, reads what you read, and acts on your behalf.

But as the features grow more agentic, so do the risks.

Security researchers have already demonstrated prompt injection exploits that let malicious websites hijack AI agents to access passwords, cloud accounts, or banking sessions. Brave researchers showed that even a Reddit comment could trigger a cross-domain action, overriding browser protections entirely.

Then there’s surveillance.

AI browsers don’t just collect search queries. They collect context. What you read. What you click. What you think about next. And with ad-serving integrations on the horizon, the incentives are clear: the more your browser knows about you, the more valuable you become.

Why this matters:

  • AI browsers are the new data collection battleground, but few are ready for the privacy and compliance load that comes with “agentic” web access.

  • The reason for this is the security model is broken by design: LLMs can’t yet separate trusted and untrusted content.

  • As the majors race to own your browsing layer, AI is likely to become the most invasive form of surveillance capitalism yet.

NVIDIA’s China GPU Market Share is now 0%

NVIDIA’s grip on the Chinese AI market has effectively collapsed.

Speaking to Citadel Securities, Jensen Huang confirmed that the company’s 95% market share in China has fallen to zero, following the extension of US export restrictions on H100, A100, and H20 GPUs. NVIDIA generated $17 billion in Chinese revenue last year, roughly 13% of its total, but said in its latest SEC filing that it recorded no H20 sales in Q2 and took a $4.5 billion charge related to the export ban. Huang called the result a “policy failure,” arguing that “America has lost one of the largest markets in the world.”

The loss is quickly being filled.

Domestic firms like Huawei, Biren, and Alibaba are now testing architectures optimised for local conditions.

Alibaba this week claimed its new GPU pooling system cut NVIDIA GPU usage by 82%, by dynamically allocating compute between inference and training workloads. The system reportedly boosts utilisation rates to above 90%, allowing Alibaba Cloud to reduce the number of active NVIDIA GPUs required to achieve the same output.

Why this matters:

  • NVIDIA’s China business has effectively been erased, removing a major pillar of its global revenue base.

  • With export restrictions still in place and local innovation rising, China’s AI industry is decoupling in real time and keeps finding ways to do more with less.

  • If the innovation trend continues, how long US companies will remain hardware-capability-dominant remains to be seen.

Meta’s $27B Financial Engineering Project

Meta and Blue Owl made headlines this week for a $27 billion AI infrastructure investment that both did and did not happen.

The announcement of the Hyperion Data Centre in Louisiana was pitched as a landmark commitment to AI. On paper, at least. The reality? Financial engineering dressed as innovation.

Per Stephen Klein, founder of Curiouser.ai’s LinkedIn post, here’s what really happened:

  • Blue Owl put in $7B cash for an 80% stake.

  • Meta contributed land and construction-in-progress for 20%.

  • Meta immediately pocketed a $3B cash distribution.

  • Meta will lease the site back on rolling four-year terms, backed by a 16-year value guarantee.

Net result? Meta spent nothing, removed the asset from its balance sheet, and walked away $3B richer. All while declaring a multi-billion-dollar “AI investment.” And the timing wasn’t subtle.

Tuesday: “$27B AI investment” announcement.

The cuts hit FAIR (Yann LeCun‘s lab), AI infrastructure, and product - the very teams responsible for turning 5GW of planned capacity into something useful. Meta’s Chief AI Officer Alexandr Wang justified the move by claiming the company had become “too bureaucratic” and that “fewer conversations” would make it more agile. All while his new TBD Lab is expanding aggressively, hiring ex-OpenAI and Thinking Machines talent on equally premium terms.

Why this matters:

  • The message couldn’t be clearer: public conviction, private caution.

  • Meta is hedging its bets with off-balance-sheet financing and sale-leaseback structures, the same playbook other tech giants now use to keep capital-light while appearing all-in on AI.

  • The layoffs hit the beleaguered unit Zuckerberg was recruiting for using nine-figure pay packets, so the “efficiency” story plays well for optics.

Applied Digital’s Mystery $5B Hyperscale Customer

Applied Digital just signed a $5 billion, 15-year agreement with an unnamed US-based investment-grade hyperscaler.

The deal covers 200MW of critical IT capacity at the company’s Polaris Forge 2 campus in North Dakota, with the tenant holding first refusal on another 800MW, enough to scale the site to 1GW total. Once complete, Polaris Forge 2 will span 900 acres, feature a PUE of 1.18, and operate with near-zero water use, making it one of the most efficient builds in the US AI infra pipeline. The campus will phase in power from 2026–2027, adding to Applied’s existing 400MW under lease with major hyperscalers across its North Dakota sites.

Why this matters:

The Rundown

Every headline this week carried the same subtext: control the downside.

Let’s break down how:

  1. NVIDIA and OpenAI are building a trillion-dollar flywheel of circular guarantees - impressive on paper, fragile in practice.

  2. Anthropic’s $50B TPU play is less about scale, more about redundancy - a way to spread exposure before the music stops.

  3. Crusoe’s orbital ambitions are a hedge against Earth’s biggest bottleneck - power.

  4. All the major AI players are targeting AI browsers to minimise competitor share of the AI browser market.

  5. NVIDIA’s China collapse showed how fast market share can evaporate when politics enters the stack.

  6. Meta’s $27B “investment” turned out to be a liquidity exercise disguised as conviction - cash out first, spin the story later.

  7. Even Applied Digital’s $5B lease fits the theme - hyperscalers want the capacity, but not the capex.

Everyone’s still scaling, hard, but the smartest players are doing it with one eye on the worst-case scenarios.

See you next week.

Reply

or to participate