How do you build a cloud business to challenge your biggest customers without building a cloud business?

Sell others the kit they need to build it for you. Foment massive competition. Wait until the winners start to emerge from the fray. Then rent it back from a chosen few to ensure they survive the coming winter.

NVIDIA is competing with AWS, Azure, and GCP without competing.

And Lambda is their newest champion.

I’m Ben Baldieri. Every week, I break down what’s moving in GPU compute, AI infrastructure, and the data centres that power it all.

Let’s get into it.

The GPU Audio Companion Issue #61

Want the GPU breakdown without the reading? The Audio Companion does it for you, but only if you’re subscribed. If you can’t see it below, click here to fix that.

NVIDIA Rents 18,000 GPUs from Lambda

NVIDIA has signed a $1.5B deal to lease 18,000 GPUs from Lambda over four years.
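A quick back-of-the-envelope check on what those headline numbers imply. This is a rough sketch only: it assumes a flat blended rate and full four-year utilisation, and ignores discounts, ramp schedules, and payment structure, none of which are public.

```python
# Rough implied economics of the reported NVIDIA-Lambda deal.
# Assumes a flat rate over the full term; actual contract terms are not disclosed.
total_value = 1.5e9        # $1.5B total contract value (reported)
gpus = 18_000              # GPUs leased (reported)
years = 4                  # contract length (reported)

hours_per_gpu = years * 365 * 24   # hours each GPU is under contract
gpu_hours = gpus * hours_per_gpu   # total GPU-hours sold

implied_rate = total_value / gpu_hours
print(f"Implied blended rate: ${implied_rate:.2f} per GPU-hour")
# → Implied blended rate: $2.38 per GPU-hour
```

Roughly $2.40 per GPU-hour, locked in for four years, is a very different proposition from chasing spot rates in a falling market.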

Renting hardware from a customer might seem strange, but it’s not unprecedented. NVIDIA does the same thing with CoreWeave. Team Green rents 11,000 GPUs from CoreWeave’s EOS cluster under a similar arrangement for internal development use.

But the context today?

Very different.

Neoclouds are feeling the squeeze from high financing costs, falling rental rates, and mounting distressed debt. European players, such as Northern Data, are feeling the pressure. Smaller operators in the US are defaulting on lease payments. Technical tradeoffs made to boost marketing efforts are biting. Lambda, therefore, now has a vote of confidence from the very top and a cash-flow lifeline.

Then, there’s the competitive layer.

Hyperscalers like Amazon, Google, and Microsoft are all designing their own silicon, and increasingly view NVIDIA not just as a supplier, but as a rival in enterprise AI. By shoring up trusted neoclouds like Lambda and CoreWeave, NVIDIA gets to play the big three at their own game without meeting them head-on.

And if history is a guide, that matters.

The early internet buildout was comparable to the massive infrastructure deployments we see happening today. What followed then was an equally massive crash. With another winter looming, NVIDIA’s backing could be the difference between survival and collapse for the players it chooses to keep close.

Why this matters:

  • Long-term contracts like the one NVIDIA just handed out are like gold dust in a market where you’re lucky to sign anything longer than twelve months.

  • They’re not given to just anyone, so Lambda must be doing something right with their cluster design and deployment choices.

  • Just as with the internet buildout, the gap between infrastructure deployment and mass adoption will test balance sheets. NVIDIA’s chosen few are being positioned to outlast the cycle.

CoreWeave Acquires OpenPipe Amidst Core Scientific Pushback

CoreWeave has signed a definitive agreement to acquire OpenPipe.

The deal will give CoreWeave customers access to OpenPipe’s Agent Reinforcement Trainer and other tools to post-train agents on their own workflows. This follows the company’s acquisition of Weights & Biases earlier this year, deepening its move into developer tooling and model optimisation on top of its extensive GPU infrastructure.

At the same time, CoreWeave’s planned $9B all-stock acquisition of Core Scientific has come under fire.

Two Seas Capital, Core Scientific’s largest investor, has filed a proxy statement urging shareholders to block the merger, calling the process “flawed” and the valuation a “take under.” Two Seas also highlighted CoreWeave’s heavy short interest and stock volatility as risks to Core Scientific shareholders.

Why this matters:

  • CoreWeave continues to position itself as more than a GPU provider by integrating yet more RL, fine-tuning, and tooling directly into its infrastructure.

  • Aggressive expansion both up and down the stack, however, is not without risk.

  • Investors questioning whether the Core Scientific deal undervalues the 1.3GW footprint could be the start of a trend, putting the brakes on further acquisitions.

Beyond.pl Lights Up Poland’s First AI Factory

Poland now has a sovereign AI supercomputer.

Beyond.pl has switched on the F.I.N., its NVIDIA-powered AI supercomputer built on DGX B200 SuperPOD RA with Quantum-2 InfiniBand networking and Pure Storage FlashBlade. The system anchors the company’s new AI Factory in Poznań, billed as the most powerful sovereign AI facility in Central and Eastern Europe. The campus runs on 100% renewables with a PUE of 1.2, is certified to the EU’s highest security standards, and delivers the full stack of services (GPUaaS, AIaaS, GPU colocation, and managed services), with NVIDIA AI Enterprise layered on top for good measure.
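For reference, PUE (power usage effectiveness) is total facility power divided by IT power, so a PUE of 1.2 means 0.2W of cooling and overhead for every 1W of compute. A minimal sketch of the arithmetic; the 10MW IT load below is purely illustrative, as no total power figure is given here.

```python
# PUE = total facility power / IT equipment power.
# The 10 MW IT load is a hypothetical figure for illustration only.
def overhead_fraction(pue: float) -> float:
    """Share of total facility power that is non-IT overhead."""
    return 1 - 1 / pue

it_load_mw = 10.0          # hypothetical IT load
pue = 1.2                  # reported PUE
total_mw = it_load_mw * pue
print(f"Total draw: {total_mw:.1f} MW, overhead share: {overhead_fraction(pue):.0%}")
# → Total draw: 12.0 MW, overhead share: 17%
```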

Why this matters:

  • Central and Eastern Europe now has its own certified AI hub, ensuring workloads remain in-region and compliant with EU regulations.

  • With B200s and NVIDIA AI Enterprise baked in, Beyond.pl cements its alignment with Team Green’s reference architecture.

  • By combining renewables, security, and sovereignty, Beyond.pl is pitching itself as the default AI infra partner for CEE governments, enterprises, and research institutions.

Anthropic Hits $183B Valuation with $13B Raise

Anthropic just closed a $13B Series F, valuing the company at $183B post-money.

ICONIQ, Fidelity, and Lightspeed led the round, with a host of other big names like BlackRock, Blackstone, T. Rowe Price, Qatar’s sovereign fund, and Ontario Teachers’ among the long list of backers. The raise follows one of the fastest revenue run-ups in tech history: from ~$1B run-rate at the start of 2025 to $5B by August. Anthropic now serves 300,000+ business customers, with large accounts (>$100k each) up nearly 7x YoY. Claude Code alone is on a $500M run-rate just three months after launch.

Why this matters:

  • Anthropic is pulling in Fortune 500 contracts at pace.

  • Those contracts go a long way in boosting investor confidence, which explains the massive names taking part in this round.

  • $13B is a huge round. It buys GPUs, talent, and distribution leverage, cementing Anthropic’s position as OpenAI’s only true peer.

Amkor’s $2B Arizona Advanced Packaging Facility

Amkor just confirmed it will build a $2B advanced packaging and test facility on a 104-acre site in Peoria, Arizona.

Backed by $407M in CHIPS Act funding and anchored by Apple as its first customer, the site is slated to begin production in early 2028. The plant will handle high-performance packaging platforms like TSMC’s CoWoS and InFO, the tech behind NVIDIA’s GPUs and Apple’s latest silicon. TSMC has already agreed to offload packaging from its Phoenix fabs to Amkor’s new facility, cutting weeks off the turnaround time currently spent shipping wafers back to Asia.

Why this matters:

  • The US is plugging the back end of the chip supply chain. Without domestic packaging, even US-made wafers rely on Taiwan and Korea to become finished chips.

  • Shortages of NVIDIA’s H100 were worsened by limited packaging throughput. A US facility helps insulate supply at the highest end.

  • Given the CHIPS Act funding and Washington’s new role as Intel’s largest shareholder, one can’t help but wonder if the Federal Government will take a similar interest in the facility.

Google Bets on AI to Fix the AI Energy Squeeze

Google is bringing 29 startups into its new AI for Energy accelerator, split across North America and Europe.

The program offers mentorship, Google Cloud infrastructure, and access to AI tools, with a focus on cutting-edge projects, including grid optimisation and flexible interconnection, as well as AI-driven construction and weather forecasting. Startups like Mercury Computing (accelerating data centre interconnection), Lōd (risk-aware grid intelligence), Tibo Energy (unlocking grid capacity with AI), and 26 others are part of the first batch.

Why this matters:

  • Hyperscalers know AI won’t scale without new approaches to energy efficiency and grid flexibility. This accelerator aligns directly with the power bottleneck facing cloud and AI infra.

  • By embedding early with energy-tech startups, Google is positioning itself as the hyperscaler most directly tied to the energy transition.

  • Tools from these startups can reduce costs and risks for both utilities and data centres, creating a virtuous cycle that directly benefits Google’s cloud and AI customers.

Applied Digital Expands CoreWeave Lease to 400MW

Applied Digital has locked in another 150MW lease with CoreWeave at its Polaris Forge 1 campus in Ellendale, North Dakota.

That brings the total CoreWeave footprint on-site to 400MW, spread across three long-term leases worth ~$11B over ~15 years. The first 100MW facility is due online in Q4 2025, with the 150MW second build scheduled for mid-2026. The new 150MW addition is set to be live in 2027. Long-term, the Polaris Forge 1 campus is designed to scale to 1GW of IT load.
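Some rough unit economics from those reported figures. This is a sketch that assumes the ~$11B is spread evenly over ~15 years across the full 400MW; the actual lease schedules and per-facility terms are not public.

```python
# Implied revenue per MW of contracted capacity at Polaris Forge 1.
# Assumes even spread of revenue over the term; real schedules are not disclosed.
total_revenue = 11e9   # ~$11B contracted revenue (reported)
years = 15             # ~15-year lease terms (reported)
capacity_mw = 400      # total leased capacity in MW (reported)

per_mw_year = total_revenue / years / capacity_mw
print(f"Implied revenue: ${per_mw_year/1e6:.2f}M per MW per year")
# → Implied revenue: $1.83M per MW per year
```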

Why this matters:

  • With $11B in contracted revenue, this is one of the largest single-tenant campus arrangements in the AI infra market.

  • Few sites globally are engineered for 1GW of AI-dedicated capacity; Ellendale positions North Dakota as a genuine AI hub.

  • Long-term leases provide predictable returns for Applied Digital, while letting CoreWeave secure capacity in a market where lead times are stretching out years.

The Rundown

The AI infrastructure market is creaking.

Neoclouds and hyperscalers alike have deployed hundreds of billions of dollars’ worth of NVIDIA hardware globally. Many were launched around 2023. Just as the AI hype machine was spinning up. Back then, coders were going to be obsolete by mid-2025. AGI was just around the corner. Enterprises were all in, and all we needed was more compute.

But the reality?

The meaningful demand that’s needed to keep all 150+ undifferentiated neocloud commodity producers afloat does not exist.

That’s a problem for NVIDIA.

Team Green needs the best of these companies to survive. Why? The hyperscalers are furiously developing their own hardware to reduce their dependence on a company they increasingly view as a competitor. AMD are playing the same game they did with Intel in the CPU market. And inference-specific accelerators are winning the kinds of contracts that neoclouds can only dream of.

So how do you ensure survival?

Incubate the winners.

That means capital, contracts, and strategic partnerships that extend runway through the coming downturn. Four-year deals like the one just handed to Lambda are exactly that: a shield against falling rental rates, a stabiliser against distressed debt, and a way to ensure NVIDIA’s champions don’t just make it through the winter, but come out of it stronger, positioned as credible counterweights to the hyperscalers’ in-house silicon.

Sometimes, the best way to compete isn’t to compete at all.

See you next week.
