It finally happened.
After months of will-they-won’t-they, Rumble is buying Northern Data. It’s a nail in the coffin for Europe’s truly sovereign AI ambitions, a massive foothold for the freedom-focused US video platform, and a way back into the EU for Tether after the USDT ban.
I’m Ben Baldieri. Every week, I break down what’s moving in GPU compute, AI infrastructure, and the data centres that power it all.
Here’s what’s inside this week:
Rumble to Buy Northern Data for $5B
AMD Acquires MK1 to Boost AI Inference Stack
Corvex Goes Public, Firmus Raises A$500M
Google Turns NotebookLM Into a Full AI Researcher
US Chip Curbs Bite as China Scrambles for Compute
Yann LeCun to Exit Meta Amid Deepening AI Schism
Fluidstack Lands $50B Anthropic Data Centre Deal
Let’s get into it.
The GPU Audio Companion Issue #
Want the GPU breakdown without the reading? The Audio Companion does it for you, but only if you’re subscribed. If you can’t see it below, click here to fix that.
Rumble to Buy Northern Data for $5B
Just a few short months after the initial story broke, Rumble is buying Northern Data.
The deal will hand the video platform control of 22,000 NVIDIA GPUs, including 20,000 H100s, and a global network of data centres across the US and Europe. Backed by Tether, which has pledged to become an anchor customer, the acquisition folds Northern Data’s AI cloud and data centre businesses into Rumble Cloud. Northern Data will delist after completion, giving Rumble an instant foothold in high-density compute and positioning it to scale beyond media into full-stack AI services.
What this means for the previously announced Gcore and Core42 partnerships remains to be seen.
Why this matters:
Rumble’s pivot from media to infrastructure marks one of the boldest crossovers yet between the content and compute worlds.
The deal creates a new sovereign-aligned AI cloud, backed by crypto liquidity and positioned against Big Tech dominance.
With Tether funding and Northern Data hardware, Rumble gains both the assets and ideology to challenge hyperscalers on freedom, privacy, and ownership of AI infrastructure.
AMD Acquires MK1 to Boost AI Inference Stack
AMD’s software continues to mature through acquisitions.
The company has acquired MK1, an AI startup specialising in high-speed inference and reasoning technologies. Per the announcement, MK1’s Flywheel inference engine is optimised for AMD hardware, making it easier for customers to unlock more value from their deployments. The software will now integrate into AMD’s Artificial Intelligence Group, bolstering the firm’s enterprise AI software capabilities and inference performance.
Why this matters:
AMD’s GPUs have become increasingly competitive with NVIDIA’s offerings from a raw hardware capability standpoint in the past 12 months. Its software stack, on the other hand, has lagged behind.
Following public challenges from the likes of SemiAnalysis, the focus has been on improving software capabilities to deliver a “ready out of the box” experience.
AMD says this acquisition strengthens its “AI performance and efficiency across the stack,” effectively giving it a proprietary inference engine while concurrently unlocking full traceability in the push towards enterprise-grade compliance.
Corvex Goes Public, Firmus Raises A$500M
Two major neocloud moves this week, one on Wall Street and one in Australia.
First up, Arlington-based Corvex is going public via an all-stock merger with Movano (Nasdaq: MOVE).
Founded by ex-Google engineer Seth Demsey and Jay Crystal, Corvex builds secure GPU clouds for AI workloads that need compliance-grade infrastructure. Think infrastructure built specifically for defence, government, and finance customers. It runs Confidential Computing environments, not just compute leases, and calls its stack the Amplified AI Cloud.
The deal gives it $40 million in fresh capital and access to a $1 billion equity line from Chardan, with Corvex shareholders keeping 96 percent of the combined company after the merger.
Next, down under, Firmus raised AU$500 million to expand its Project Southgate rollout.
For the uninitiated, Project Southgate is a 1.6 GW national network of AI Factories built with CDC Data Centres and NVIDIA.
The funds will see new sites coming to Tasmania, Melbourne, Canberra, Sydney, and Perth. A key point about Firmus’s overall approach is its focus on local manufacturing and energy integration. That integration means Firmus is effectively building AI infrastructure as an industry, not just a service.
Why this matters:
Corvex’s reverse merger makes it the latest private AI cloud to tap public markets. That means deeper potential liquidity while giving investors exposure to the sovereign-secure compute theme.
Firmus’s expansion shows how AI infrastructure continues to become a matter of national policy, blending industrial strategy, energy independence, and AI capability.
Together, they exemplify the ongoing global trend: AI infrastructure is going sovereign, capital-intensive, and vertically integrated, from Virginia’s secure clouds to Tasmania’s green factories.
Google Turns NotebookLM Into a Full AI Researcher
Google is turning NotebookLM into a full-fledged AI research assistant.
A new “Deep Research” mode now runs multi-step web searches, builds structured reports, and links every claim to its source, effectively turning the tool into an on-demand analyst. Users can choose Fast or Deep modes, while expanded support for Sheets, PDFs, Word docs, and Drive files lets NotebookLM summarise and cross-reference mixed data inside one workspace.
Why it matters:
Deep Research makes NotebookLM a serious entry into the AI productivity race, bringing autonomous web reasoning directly into user workflows.
File-type expansion shows Google’s intent to own the full research lifecycle, from ingesting raw data to generating briefings and presentations.
It reinforces Google’s pivot toward embedded AI tools, where users don’t leave the workspace; AI comes to them.
US Chip Curbs Bite as China Scrambles for Compute
China’s AI sector is feeling the strain of Washington’s export controls.
Severe GPU shortages have pushed Beijing to intervene directly in how domestic chip output is allocated, prioritising Huawei and other national champions reliant on SMIC fabrication. AI firms like DeepSeek have already delayed model releases due to the crunch. Others are resorting to chip bundling, combining thousands of lower-grade processors into massive, power-hungry systems that strain grid capacity. Local governments are now subsidising electricity bills just to keep these setups running. Despite the workarounds, China’s advanced chipmaking remains years behind.
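To put rough numbers on why bundling strains the grid, here’s a quick back-of-envelope sketch in Python. The chip figures are illustrative assumptions on my part, not reported specs, but the ratio they produce shows the shape of the problem: matching H100-class throughput with lower-grade silicon multiplies the power draw.

# Illustrative back-of-envelope only; the specs below are assumptions, not reported figures.
H100_TFLOPS, H100_WATTS = 1000, 700          # assumed dense throughput and board power per H100
DOMESTIC_TFLOPS, DOMESTIC_WATTS = 120, 350   # hypothetical lower-grade domestic accelerator

def bundle_power_ratio() -> float:
    """Power draw of a bundled lower-grade cluster vs an H100 cluster at equal raw throughput."""
    chips_per_h100 = H100_TFLOPS / DOMESTIC_TFLOPS          # ~8.3 lower-grade chips to match one H100
    return (chips_per_h100 * DOMESTIC_WATTS) / H100_WATTS   # ~4.2x the power for the same FLOPS

print(f"Bundled cluster draws roughly {bundle_power_ratio():.1f}x the power of an H100 build")

Under those assumptions, a bundled cluster burns roughly four times the power for the same raw FLOPS, before counting the interconnect overhead of stitching thousands of extra chips together. That is exactly the kind of load local governments are now stepping in to subsidise.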
Even so, Huawei’s Ascend 910C production is accelerating, and some US officials now privately concede that limited exports of older chips may resume within two years.
Given that NVIDIA’s lobbying spend has surged to $3.5 million this year, that quiet admission shouldn’t come as a surprise, especially considering Team Green has a lot to lose if it can’t maintain access to arguably its largest market outside the US.
Why it matters:
China’s chip bottleneck shows US export controls are working, for now.
Beijing’s intervention in semiconductor allocation signals a state-led approach to triage in the AI race.
The longer the gap persists, the greater the global split between US-aligned compute supply and China’s homegrown alternatives, a rift that could reshape AI infrastructure for the next decade.
Yann LeCun to Exit Meta Amid Deepening AI Schism
Yann LeCun, Meta’s chief AI scientist and one of the architects of modern AI, is reportedly leaving to start his own company.
His exit caps months of tension over Mark Zuckerberg’s pivot toward AGI-scale systems and rapid product rollouts. LeCun has long dismissed large language models as incapable of real reasoning, a stance increasingly out of step with Meta’s direction. After the $240B market hit from Meta’s AI spending spree and a string of high-profile exits, this departure looks less like a reshuffle and more like a reset.
Why this matters:
LeCun’s FAIR lab defined Meta’s research identity; its erosion signals the company’s full shift from science to speed.
Meta’s nine-figure hires and “superintelligence” focus suggest it’s now chasing OpenAI, not charting its own path.
The intellectual backbone of Meta’s AI division is walking out — and that loss can’t be replaced with payroll.
Fluidstack Lands $50B Anthropic Data Centre Deal
Anthropic is building its first data centres. And they’re doing it with Fluidstack.
The $50 billion deal will see custom AI data centres built in Texas and New York, extending through 2026. The facilities are designed for Claude’s frontier workloads: dense, low-latency environments built for scale. The project will create 3,000 jobs and align with the Trump administration’s AI Action Plan to keep compute onshore.
For Anthropic, it’s vertical control.
For Fluidstack, it’s proof of becoming one of the go-to builders for frontier AI.
Why this matters:
The real story here is what’s not said: no GPU counts in the press releases.
Anthropic recently signed deals to use Google’s TPUs. Google is also backstopping Fluidstack’s leases with TeraWulf and Cipher Mining.
With these factors in mind, Fluidstack could be quietly managing TPU infrastructure alongside the usual suspects, which would make it the first neocloud operator to build for both frontier labs and hyperscalers.
Click here to read the full announcement from Anthropic and the concurrent announcement from Fluidstack.
The Rundown
Spare a moment for what could have been.
Northern Data once billed itself as Europe’s sovereign answer to the hyperscalers, promising green GPU infrastructure at scale and a homegrown AI cloud to rival the US giants. Instead, it became a cautionary tale.
Executive arrests in Germany, VAT investigations, missed filings, plummeting H100 rental rates, and a high cost of capital left the company fighting on every front. Partnerships meant to save it fizzled into silence. It tried to reposition Taiga Cloud and Ardent Data Centers, but Rumble’s $5B acquisition shows us, definitively, that it wasn’t enough.
22,000 NVIDIA GPUs, four data centres, and a European footprint that once symbolised independence live on.
But the mission, a sovereign, European AI cloud, doesn’t.
See you next week.
Everything Else
More honourable mentions for this week below:

