It appears that rumours of NVIDIA's death in China were greatly exaggerated.

H200 shipments have been approved. Again. For a fee. Customers are lining up. NVIDIA shareholders are happy, as are Chinese buyers.

Beijing?

Not so much.

I’m Ben Baldieri. Every week, I break down what’s moving in GPU compute, AI infrastructure, and the data centres that power it all.

Here’s what’s inside this week:

  • NVIDIA H200s return to China, on Washington’s terms

  • Brookfield and Qatar launch a $20bn AI partnership

  • Crusoe buys 1.21GW of Boom’s turbines

  • OpenAI drops GPT-5.2 after last week’s ‘Code Red’

  • Intel moves forward on the SambaNova acquisition

  • Amazon and Microsoft commit $52bn to India

  • SpaceX preps a record IPO to fund orbital data centres

Let’s get into it.

The GPU Audio Companion Issue #78

Want the GPU breakdown without the reading? The Audio Companion does it for you, but only if you’re subscribed. If you can’t see it below, click here to fix that.

NVIDIA H200s Return to China On Washington’s Terms

Team Green, Red, and Blue are returning to the Middle Kingdom.

The US will permit H200 exports to China with a strict structure: a 25% fee paid to the US government, mandatory security review on US soil before re-export, and Commerce-approved end customers. AMD and Intel will be treated the same way.

Chinese hyperscalers moved quickly.

Demand is already above current output, prompting NVIDIA to consider additional H200 manufacturing capacity, even though TSMC lines are mainly committed to Blackwell and Rubin. However, Beijing has not yet cleared any imports. Regulators have held emergency meetings and may impose conditions, including pairing H200 purchases with domestic accelerators to support local vendors.

But China’s own chips still lag the H200 by a wide margin, which makes approval likely.

Why this matters:

  • NVIDIA regains controlled access to its fastest-growing market without compromising its lead architectures.

  • China sees a meaningful uplift over H20-class parts while still operating within US restrictions, though this may slow the development of local alternatives.

  • Washington sets a new template: selective access, high fees, tight oversight, and no path to Blackwell or Rubin, which remain off-limits.

Brookfield, Qatar Launch $20bn AI Partnership

Brookfield and the Qatar Investment Authority have created a $20bn vehicle to build AI infrastructure in Qatar and select global markets.

Brookfield is investing through its new AI fund, which is targeting up to $100bn. Qatar is using its newly formed AI arm, Qai. The resulting JV puts Qatar directly in the ring with Abu Dhabi’s G42 and KSA’s Humain, and reinforces Brookfield’s view that AI infrastructure is a multi-trillion-dollar buildout best financed through sovereign-scale partnerships.

Why this matters:

  • The Gulf is becoming ever more central to the AI buildout. This move sees Qatar join the UAE and Saudi Arabia in shifting sovereign wealth from passive tech bets to direct ownership of chips, data centres, and offtake.

  • Brookfield gains a partner that can greenlight multi-billion-dollar deployments in a single move, matching the scope and scale of G42, Humain, and Mubadala-backed projects.

  • Given recent export approvals for NVIDIA and AMD hardware to the UAE and KSA for G42 and Humain, respectively, Qatar might be next, further entrenching US regional influence through semiconductors.

Crusoe Buys 1.21GW of Boom’s Supersonic Turbines

Crusoe has become the launch customer for Boom Supersonic’s new 42MW Superpower natural gas turbines.

The turbines are a spinoff of Boom’s supersonic engine work, using the same high-temperature core to maintain full output even in extreme heat. They also run water-free, which is becoming a hard requirement for new AI sites across the US. At 42MW per unit, the 1.21GW order works out to roughly 29 turbines. Crusoe’s order is included in Boom’s $1.25bn backlog and accompanies a fresh $300m funding round that brings the turbine business into full production.

Why this matters:

  • For Crusoe, this is the latest step in its energy-first playbook.

  • The company has been chasing faster routes to power than traditional grid interconnects can deliver, and a pre-packaged 42MW module checks that box cleanly.

  • The purchase also fits Crusoe’s broader strategy of reaching multi-GW scale through diversified energy assets rather than relying solely on utility timelines.

OpenAI Drops GPT-5.2 After Last Week’s ‘Code Red’

Following on from last week’s turmoil, OpenAI has released GPT-5.2.

GPT-5.2 is positioned as OpenAI’s strongest work model yet and a clear attempt to reassert leadership in the enterprise and agentic-workflow race. The pitch is simple: faster, cheaper output than human professionals, better document handling, stronger tool use, cleaner spreadsheet and slide generation, sharper coding performance, longer context, and fewer factual slips. Early partners such as Notion, Databricks, JetBrains, and Shopify are already highlighting gains in long-running workflows and multi-tool orchestration. Benchmarks also show meaningful improvements.

Why this matters:

  • OpenAI is under pressure on all fronts, which means the timing, subtext, and speed of this release cannot be ignored.

  • While 5.2 is a step forward, it doesn’t address the bigger strategic pressures and outstanding questions in the enterprise AI space around governance and platform dependence.

  • It also lands into an enterprise environment still wrestling with the basics. Long-context models (along with context engineering) help, but production deployments continue to struggle with multiple issues. 5.2 moves the needle, but the bottlenecks aren’t only model-side.

Intel Moves Forward on SambaNova Acquisition

Intel has signed a non-binding term sheet to acquire SambaNova.

SambaNova was once valued at $5bn during the 2021 funding surge but has since faced tightening demand, declining implied valuations, and a tougher market for custom silicon. That combination created an opening for Intel, which has struggled to build momentum in AI hardware and is now pursuing acquisitions to rebuild an AI-first roadmap.

If the acquisition goes through, SambaNova’s system-level approach would give Intel something it hasn’t had for years:

A credible performance story outside x86 and a pool of engineers who specialise in AI accelerators rather than general-purpose compute.

Why this matters:

  • Intel has struggled to build momentum in AI hardware, and buying a full-stack accelerator team is a faster route back to an AI-first roadmap than building one in-house.

  • SambaNova’s slide from its $5bn 2021 valuation shows how much tougher the custom-silicon market has become, and it gives Intel a window to buy at a discount.

  • The term sheet is non-binding, so nothing is final until a deal actually closes.

Amazon and Microsoft to Invest $52bn in India

Amazon and Microsoft have committed a combined $52bn to India, cementing the country’s rise as the next major front in cloud and AI expansion.

Amazon lifted its investment pledge to $35bn through 2030, spanning retail, logistics, AI-driven digitisation, and a larger AWS footprint. Microsoft followed with a $17.5bn commitment over four years to AI and cloud infrastructure, workforce development, and sovereign tech capacity, marking its largest Asia investment to date.

In return, India gives both companies access to something they can’t get elsewhere at this scale:

A huge digital population, political tailwinds, lighter-touch regulation, and a market not captured by Chinese incumbents.

Why this matters:

  • These deals offer hyperscalers something subtler but increasingly important: model-training diversity.

  • OpenAI, Google, and others are deploying free assistants in India for this reason, as the resulting data capture and usage patterns across languages and accents feed back into their global models.

  • Despite these massive numbers, neither Amazon nor Microsoft has shown a clear path to profitability for AI infrastructure in India. But the bet is on ownership of the demand curve, not near-term margin.

SpaceX Preps Record IPO to Fund Orbital DCs

SpaceX is preparing an IPO that could raise well over $30bn at a projected valuation around $1.5tn.

Why now?

Crusoe and Starcloud kicked off the modern race earlier this year with the first orbital H100 deployment. Google then raised the stakes with Project Suncatcher, a 2027 mission designed to test TPUs in orbit. Blue Origin is also getting involved. Now SpaceX is signalling its intent to dominate the category by tying its public-market debut to the capex required to build and launch compute clusters.

Why this matters:

  • Crusoe and Starcloud created the opening, Google validated it, and SpaceX is now moving to industrialise it with public-market capital.

  • If the IPO lands at size, SpaceX becomes the first operator capable of funding orbital compute at meaningful scale.

  • With terrestrial power constraints tightening, other hyperscalers may eventually need an orbital strategy of their own, or risk ceding the frontier to space-native players.

The Rundown

More massive moves, in the same categories as ever.

Semiconductor geopolitics. Sovereign wealth involvement. Neoclouds doing wild things with energy. Model releases. Acquisitions. Investments. And more talk of space data centres.

It’s interesting to consider how the industry has both changed and stayed the same over the past 12 months.

Because it’s the same names that come up over and over again.

Everyone is sprinting in the same direction. But where does that direction lead? Who knows.

See you next week.
