
Issue #32: OpenAI’s $40B Raise, AMD's $4.9B Buy, & Laser Cooled Chips

Feat. OpenAI, AMD, ZT Systems, Fluidstack, Macquarie, DreamActor-M1, Maxwell Labs, Sandia Labs, Google, CoreWeave, and Homer City

OpenAI just closed the biggest private funding round in history.

$40 billion raised. SoftBank, Thrive, and a who’s who of global finance backing it.

That alone would be headline-worthy.

But, there’s more.

Google is renting GPUs from CoreWeave. Fluidstack and Macquarie are striking deals. AMD sealed the deal for ZT Systems. And Sandia Labs is cooling chips with lasers.

Yes, lasers.

Here’s what’s inside this week:


Let’s get into it.

The GPU Audio Companion Issue #32

Want the GPU breakdown without the reading? The Audio Companion does it for you—but only if you’re subscribed. If you can’t see it below, fix that here.

OpenAI Closes $40 Billion Round

The most valuable AI company just pulled off the largest private fundraise in history.

OpenAI has officially closed its $40 billion round, led by SoftBank and supported by top-tier backers including Microsoft, Coatue, Altimeter, and Thrive. The close caps its move away from the capped-profit structure, freeing it to raise from institutional LPs at scale. The raise sets the stage for its Stargate expansion, international infrastructure buildouts, and further model releases.

Why this matters:

  • $40B gives OpenAI an enviable war chest as we head into the next phase of AI development.

  • Yet OpenAI’s burn rate is astronomical, so even a war chest this size may not be enough.

  • The full amount is contingent on OpenAI completing its transition to a for-profit company; if that falls through, the round could drop to $20B.

AMD Acquires ZT Systems

AMD is entering the rack-scale game.

AMD has completed its acquisition of ZT Systems, giving it the vertical integration it’s lacked compared to Nvidia. The move brings in rack-scale systems design, open-source ROCm integration, and a manufacturing footprint AMD aims to divest separately. With $500 billion projected in AI accelerator spend by 2028, this is AMD’s systems-level swing at the burgeoning data centre GPU market.

Why this matters:

  • AMD can now deliver full-stack AI infrastructure, not just chips.

  • This means AMD may yet mount a meaningful challenge to Nvidia’s rack-scale solutions like the GB200 NVL72 systems.

  • The MI325X accelerators are already showing promising performance at lower price points than Nvidia equivalents; the same dynamic could play out at rack scale.

Fluidstack Inks Funding Deal With Macquarie

Europe’s AI buildout just got a financial boost.

Fluidstack has teamed up with Macquarie to fund the rollout of GPU clusters powering AI labs across the continent. The partnership focuses on enabling AI labs, researchers, and early-stage teams to access cutting-edge compute without the upfront cost. Fluidstack will operate the clusters using Nvidia GPUs, while Macquarie provides structured financing behind the scenes.

Why this matters:

  • Hardware financing is the great enabler for GPUaaS platforms to scale globally.

  • Macquarie’s backing brings institutional credibility to Fluidstack’s operation.

  • Fluidstack’s positioning in Europe could eventually challenge the larger regional players like Nebius.

AI Animation (and Deep Fakes) Inch Closer

AI animation tools are moving past talking heads.

A new paper out of Zhejiang University and ByteDance introduces DreamActor-M1, a diffusion-transformer-based framework that can generate holistic, expressive, and long-term consistent human image animations. Think 3D body skeletons, head spheres, facial embeddings, and temporal references to produce smooth, identity-preserving movement across full-body, upper-body, and portrait scales.

Why this matters:

  • The outputs DreamActor-M1 produces are a step-change in AI animation capabilities.

  • There’s now a clear path toward more realistic, controllable character generation for gaming, virtual humans, and digital twins.

  • On the more worrying side, the same tools could, and likely will, be used for increasingly sophisticated deep fakes.

Maxwell Labs Tests Laser Cooling With Sandia Labs

Lasers might be the future of chip cooling.

Maxwell Labs is working with Sandia and UNM to test laser-based photonic cold plates for GPU cooling. The idea is to replace or complement water and air cooling with microscopic structures that channel cooling light to localised hot spots. If successful, the method could reduce energy and water use while pushing chip performance further than previously possible.

Why this matters:

  • Cooling accounts for up to 40% of data centre energy costs. This could change that.

  • Photonic cooling allows for pinpoint thermal management, unlocking higher GPU throughput.

  • It’s one of the few innovations that directly addresses the power and thermal wall of modern chips.

Google to Rent Blackwell GPUs From CoreWeave

Even Google’s running out of GPUs.

Google is in advanced talks to rent Nvidia’s Blackwell chips from CoreWeave, giving it access to additional capacity without waiting on internal supply chains. It’s also negotiating to colocate its own TPUs within CoreWeave’s facilities. This follows mounting signs that hyperscalers are capacity-constrained and looking to neoclouds to fill the gap.

Why this matters:

  • CoreWeave is now a critical GPU supplier to the world’s largest cloud platforms.

  • Google’s demand outpacing its own infrastructure reinforces Nvidia’s lock on the high end.

  • This sets further precedent for other hyperscalers to pursue neocloud leases when supply is tight.

Homer City’s 4.5GW Gas-Powered AI Campus

This is what it looks like to go all in.

Homer City Generation and Kiewit have unveiled plans for a 4.5GW AI data centre complex in Pennsylvania. Built on the site of a former coal plant, the campus will run on natural gas and include on-site substations and transmission. Construction is expected to begin this year, with the first capacity coming online by 2026.

Why this matters:

  • 4.5GW is more power than some entire countries consume. This campus could host hundreds of thousands of GPUs.

  • Natural gas may be controversial, but it’s proving essential for immediate AI buildouts.

  • The repurposing of old fossil fuel sites is now a blueprint for hyperscale expansion.

The Rundown

This week was all about scale:

Capital scale, capacity scale, and technical scale.

OpenAI now has a war chest big enough to build its own compute empire. AMD just secured the racks to match Nvidia’s vertical edge. Google’s renting chips from CoreWeave. And Fluidstack’s using structured finance to get GPUs into the hands of researchers.

Meanwhile, lasers might cool your next GPU, and Pennsylvania’s about to be home to one of the largest AI campuses on Earth.

It never ends.

See you next week.

Keep The GPU Sharp and Independent

Good analysis isn’t free. And bad analysis? That’s usually paid for.

I want The GPU to stay sharp, independent, and free from corporate fluff. That means digging deeper, asking harder questions, and breaking down the world of GPU compute without a filter.

If you’ve found value in The GPU, consider upgrading to a paid subscription or supporting it below:

Buy Me A Coffee: https://buymeacoffee.com/bbaldieri

It helps keep this newsletter unfiltered and worth reading.
