To date, the AI buildout has been led by the usual names. But this week, a “new” player entered the market.

A player who isn’t new at all. A player who has quietly financed half the industry. A player whose arrival moves the game far beyond isolated GPU deployments, model launches, funding rounds, acquisitions, and data centre builds.

One of the world’s largest asset managers has decided that it wants to own the physical backbone of AI, and it is willing to deploy industrial-scale capital to make that happen.

And while everyone else is building pieces, Brookfield is building the full stack.

I’m Ben Baldieri. Every week, I break down what’s moving in GPU compute, AI infrastructure, and the data centres that power it all.

Let’s get into it.

The GPU Audio Companion Issue #75

Want the GPU breakdown without the reading? The Audio Companion does it for you, but only if you’re subscribed. If you can’t see it below, click here to fix that.

Brookfield Launches $100B AI Infrastructure Program

Brookfield just opened a new front in the compute race.

The company is rolling out a $100 billion AI infrastructure program in partnership with NVIDIA and the Kuwait Investment Authority. The anchor vehicle is the new Brookfield Artificial Intelligence Infrastructure Fund, targeting $10 billion in equity and already sitting on $5 billion from Brookfield, NVIDIA, KIA, and a small group of institutions. The capital stack, however, scales far beyond the fund itself.

That means AI factories based on NVIDIA’s DSX Vera Rubin-ready design, behind-the-meter power, land, data centres, compute capacity for governments and global enterprises and, most importantly, funding.

To deliver on these goals, Brookfield will also stand up Radiant, a new NVIDIA Cloud Partner platform that will deploy DSX-based AI factories and support Brookfield’s sovereign-AI programs. Seed deals are already signed, including a $5 billion, 1 GW behind-the-meter power partnership with Bloom Energy, plus national AI infrastructure commitments in France and Sweden worth up to $30 billion.

The message is simple:

Private capital wants to own the physical backbone of AI, not just finance it from the sidelines.

Why this matters:

  • This is one of the largest dedicated AI infrastructure capital programs announced so far, and it ties one of the world’s biggest asset managers directly into NVIDIA’s DSX pipeline.

  • Brookfield’s sovereign programs and Radiant’s AI-factory buildouts will compete head-on with hyperscalers, sovereign clouds, and the emerging NeoCloud cohort for land, power, and GPUs.

  • The scale signals rising pressure in markets already tight on power and supply, raising fresh questions about long-term stability as $100 billion commitments start landing in the same places everyone else is trying to build.

NVIDIA Posts Record Q3, Beats Earnings Yet Again

NVIDIA just posted the biggest quarter in its history.

Revenue hit $57 billion, up 22 percent from Q2 and 62 percent year-on-year. The data centre segment brought in $51.2 billion, up 66 percent, driven almost entirely by Blackwell demand that is still outpacing supply. Cloud GPU inventory is gone. Every major buyer (hyperscalers, sovereigns, defence, automotive, research labs, and the frontier model shops) is pulling forward orders.

NVIDIA stacked the quarter with wins:

  • 10 GW for OpenAI and 1 GW for Anthropic.

  • Seven new Blackwell supercomputers, including a 100,000-GPU DOE system with Oracle.

  • Rubin CPX for long-context workloads and NVQLink for quantum coupling.

  • The first Blackwell wafers produced at TSMC Arizona.

Q4 guidance lands at $65B. If NVIDIA meets it, that is another quarter of revenue larger than many blue-chip semiconductor companies book in a year.
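
A quick back-of-envelope sanity check on those figures. This is illustrative arithmetic only: the implied Q2 and year-ago numbers are derived from the growth rates quoted above, not taken from NVIDIA's filings.

```python
# Sanity-check the growth figures quoted above.
# Derived values are approximations implied by the stated growth rates.

q3_revenue = 57.0    # $B, reported Q3 revenue
qoq_growth = 0.22    # up 22% from Q2
yoy_growth = 0.62    # up 62% year-on-year
q4_guidance = 65.0   # $B, Q4 guidance

implied_q2 = q3_revenue / (1 + qoq_growth)            # ~$46.7B
implied_q3_prior_year = q3_revenue / (1 + yoy_growth) # ~$35.2B
guided_sequential = q4_guidance / q3_revenue - 1      # ~14% QoQ

print(f"Implied Q2 revenue:        ${implied_q2:.1f}B")
print(f"Implied Q3 a year ago:     ${implied_q3_prior_year:.1f}B")
print(f"Guided Q4 sequential move: {guided_sequential:.1%}")
```

In other words, guidance implies growth decelerating from 22 percent to roughly 14 percent sequentially, on a much larger base.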

Why this matters:

  • NVIDIA is unequivocally the gravitational centre of global compute, with a data centre business bigger than entire chip giants.

  • Frontier labs standardising on Blackwell tighten NVIDIA’s hold over the training stack, and the first US-made Blackwell wafers show domestic manufacturing is moving from political promise to production.

  • The pace of growth raises stability questions: sustained 20–25 percent sequential jumps, record inventories at buyers, and multi-year preorders lock the whole market to a single cycle. If demand ever softens, the correction won’t be gentle.

HUMAIN Takes Centre Stage at US-Saudi Investment Forum

Off the back of the US-Saudi Investment Forum, one thing is clear: HUMAIN are cooking.

The PIF-backed operator will roll out up to 600,000 NVIDIA GPUs across Saudi Arabia and the United States over the next three years, including GB300 systems connected via Quantum-X800. The deal expands HUMAIN’s partnership with NVIDIA and marks its first US footprint. For that footprint, the company is teaming up with Global AI to build US data centres designed for high-density GB300 clusters.

In parallel, HUMAIN and xAI will develop a 500 MW Saudi site that anchors xAI’s first scaled deployment outside America.

AWS also joins the mix with plans to host and manage up to 150,000 NVIDIA GPUs inside a dedicated AI Zone in Riyadh.

On the software side, HUMAIN will train its Arabic models on Nemotron and plug Omniverse into its national digital twin projects, covering energy, logistics, aviation, and large-scale infrastructure. Then, there’s a new JV with AMD and Cisco, plus further Groq deployments, among other things. And, finally, HUMAIN led the $900M Series C in Luma AI, a startup pursuing multimodal general intelligence, i.e. AI that can generate, understand, and operate in the physical world.

As part of that round, Luma AI will become a customer of HUMAIN as HUMAIN builds Project Halo, a 2 GW AI supercluster in Saudi Arabia.

To summarise, the brakes are off for KSA, and the PIF are done playing.

Why this matters:

  • HUMAIN now controls one of the largest forward GPU commitments in the market, across two continents.

  • The company becomes a central partner to xAI, NVIDIA, AWS, and Global AI, creating a multi-stack supply chain that few operators can match.

  • Saudi Arabia moves closer to its goal of sovereign compute leadership, with a US presence now giving HUMAIN reach, redundancy, and credibility.

AMD Launches Enterprise AI Suite, Challenges NVAIE

AMD is pushing deeper into enterprise AI software with the launch of the Enterprise AI Suite.

The no-cost, open-source stack is designed to take companies from bare metal to production in minutes. The suite packages four components (Solution Blueprints, Inference Microservices, AI Workbench, and Resource Manager) with a focus on faster deployment, higher GPU utilisation, and avoiding lock-in. The new suite also brings prebuilt inference containers with OpenAI-compatible APIs, dynamic dev workspaces for fine-tuning, and a scheduler tied into Ray for distributed workloads.

AMD is positioning this as the missing layer that lets MI-series hardware compete for production workloads rather than just for experimental workloads.

Solution Blueprints are already live on Vultr, meaning enterprises can spin up validated AMD-based architectures immediately without custom engineering, removing a major friction point for MI300X adoption.

While the real impact of the Enterprise AI Suite remains to be seen, it’s good to see Team Red bringing the fight to Team Green. Competition is a good thing for everybody. Considering NVIDIA’s quarterly results above, the market certainly needs it.

Why this matters:

  • AMD finally has a full enterprise software layer to pair with MI300X clusters, reducing the gap with NVIDIA’s ecosystem.

  • Vultr support gives AMD immediate distribution, a practical route to market, and a way into the hands of the ICP everyone is chasing: enterprise customers.

  • At the same time, enterprises get a cleaner path to production without relying on proprietary stacks or custom tooling, and some much-needed optionality in silicon choice.

Click here to read the full announcement from AMD.

NVIDIA Teases Early Vera Rubin Arrival

NVIDIA is edging Vera Rubin forward faster than expected.

Jensen Huang told Bloomberg the full platform is now tracking for a Q3 2026 launch, not late 2026 as first signalled. Every Rubin component is taped out. The dual-chiplet GPU with 288 GB HBM4, the Rubin CPX accelerator, the new 88-core CPU, BlueField-4, next-gen NVLink switching, and updated Ethernet and InfiniBand adapters are all ready to go.

And NVIDIA already has working silicon in labs across teams.

Huang says roughly 20,000 engineers are bringing the platform up across silicon, systems, software, and algorithms.

NVIDIA expects Rubin to be central to its push toward $500 billion in compute-GPU sales by the end of 2026, even with China set to zero in its forecasts. US bans on AI chips and China’s own restrictions on foreign hardware have cut off one of NVIDIA’s largest former markets, but Huang claims global demand outside China is strong enough to hit the target.

Why this matters:

  • Vera Rubin's early arrival means NVIDIA keeps its annual cadence intact, which puts pressure on AMD, Intel, the hyperscalers, and every neocloud to refresh their roadmaps.

  • Cutting China from revenue forecasts without slowing targets shows how hot the rest of the market is running, but it also exposes NVIDIA to concentration risk if demand cools elsewhere.

  • A Q3 launch lines Rubin up to be the default buildout platform for 2027 clusters, locking in another cycle of NVIDIA-first designs across hyperscalers, sovereign clouds, and AI factories. Kyber racks might become the standard sooner than expected.

Google Launches Gemini 3, Antigravity IDE, Taipei Hub

A busy week for Google.

Gemini 3 rolled out across everything: Search, Workspace, Vertex, Android Studio, Firebase, the Gemini app, and the CLI. Google is treating this as the new backbone of its stack. With upgrades including deeper long-context reasoning, native multimodality across all inputs, and full agentic execution for multi-step workflows, the early numbers explain the magnitude of the rollout:

  • 19/20 benchmark wins against GPT-5.1 and Claude Sonnet 4.5.

  • ARC-AGI 2: 31.1% (Pro) and 45.1% (Deep Think) — a 2–3× jump over the nearest competitor.

  • Humanity’s Last Exam: 37.5% vs GPT-5.1’s 26.5%.

  • SimpleQA Verified: 72.1%, roughly a 40% lead on factuality.

These aren’t marginal lifts. ARC-AGI, in particular, is where models typically plateau. A leap this size is rare.

But that’s not all.

The new Antigravity IDE is Google’s answer to the question of AI-augmented development. It brings browser automation, multi-agent orchestration, and structured “Artefacts” (plans, screenshots, walkthroughs) so developers can see how the agent reached its output.

Reception has been mixed, with some users flagging that it’s basically another VS Code fork with Google’s agent stack bolted in.

The rollout hasn’t been the cleanest either, with users experiencing login failures, authentication loops, and hitting rate limits after just a few prompts. But Antigravity does ship with multi-model support, letting developers switch between Gemini 3 Pro, Claude Sonnet 4.5, and OpenAI’s GPT-OSS.

Finally, as if a new suite of models and IDE weren’t enough, Google is building its largest AI-infrastructure hardware engineering hub outside the US.

And it’s landing in Taipei.

The new site will house hundreds of engineers working across system design, validation, manufacturing interfaces, and data-centre deployment. Taiwan gives Google a rare full-stack advantage, with chip design talent, ODM ecosystems, semiconductor supply chains, and direct links into its existing APAC data-centre footprint, all in one place. The new hub will feed hardware into Google’s global fleet, including the systems that power Search, YouTube, Gemini, and its next wave of AI data-centre upgrades.

Why this matters:

  • Rough launch aside, this is Google’s most coherent AI push in years. A unified frontier model, with full-stack integration, an agentic IDE, and numbers that genuinely move the needle, all in the same week, throws down the proverbial gauntlet to OpenAI.

  • On the hardware side, Google already runs its first APAC data centre in Taiwan and has anchored multiple subsea cable projects that route traffic through the island. Now, Google is centralising core AI-hardware design in Taiwan, strengthening a supply-chain base that already builds much of the world’s high-density compute.

  • This tightens Google’s control over the entire AI stack at a time when hyperscalers and model builders alike are racing to ship custom silicon and specialised racks at scale to minimise the NVIDIA toll many must pay to play in the AI infrastructure arena.

GMI Cloud to Build $500M AI Data Centre in Taiwan

GMI Cloud is investing $500M in a new AI data centre in Taiwan, powered by 7,000 NVIDIA GB300 GPUs.

The 16 MW site, due online by March 2026, will process nearly 2M tokens per second. This is GMI’s biggest swing yet as a fast-rising NVIDIA Cloud Partner, anchoring its growing Asia footprint. Local demand is already lined up, with heavyweight anchors such as Wistron, Chunghwa Telecom, Trend Micro, and TECO committing to offtake.
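
Those headline numbers imply some interesting per-GPU ratios. The split below is illustrative arithmetic from the announced totals, not a GMI or NVIDIA specification, and the site power figure covers everything (cooling, networking, storage), not just GPU draw:

```python
# Rough per-GPU implications of the GMI Cloud figures quoted above.
# Illustrative approximations derived from the announced totals only.

site_power_mw = 16
gpu_count = 7_000
tokens_per_second = 2_000_000

tokens_per_gpu = tokens_per_second / gpu_count  # ~286 tokens/s per GPU
kw_per_gpu = site_power_mw * 1_000 / gpu_count  # ~2.3 kW per GPU, all-in

print(f"Throughput per GPU: ~{tokens_per_gpu:.0f} tokens/s")
print(f"Site power per GPU: ~{kw_per_gpu:.2f} kW (facility-wide, not chip TDP)")
```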

CEO Alex Yeh called data centres “strategic assets” for Taiwan’s AI future, noting near-full GPU utilisation across existing sites. GMI also plans a 50MW US facility and an IPO within three years.

Why this matters:

  • GMI joins the growing class of NVIDIA-backed NeoClouds building regional “AI factories” around the world.

  • The Taiwan build reinforces the island’s role as a critical AI hub, despite power-supply challenges.

  • With local and global clients pre-booked, the project shows sustained demand for Blackwell infrastructure, even amid talk of a broader market slowdown.

The Rundown

The centre of gravity in AI infrastructure is shifting toward the firms that control real assets: the ones with land banks, grid access, power contracts, and balance sheets that bend regions into shape.

Brookfield’s $100 billion program sets a new benchmark. Financing, power production, real estate, data centre construction, sovereign deals and a GPU cloud tied together through Radiant form a pipeline that few operators can match today. And they’re not moving alone.

Sovereigns are locking in forward GPU commitments.

New AI zones are taking shape globally. NVIDIA is on pace to reach half a trillion dollars in compute sales. HUMAIN, xAI, AWS and Global AI are building clusters across multiple continents.

This phase of the race belongs to whoever can assemble the entire chain. Capital. Power. Land. Steel. Silicon. Software. The players that cannot will rent capacity from those that can.

Buildouts reward operators who control the inputs, and right now, the leverage sits with those who can take a project from grid to GPU without relying on anyone else.

That’s why physical infrastructure is where the real power sits, because while the narrative above the stack moves fast, the foundations underneath ultimately decide who wins.

See you next week.
