Four arrests and the largest Series B in European history: AI infrastructure is a market of extremes this week.

At the top, continental record-breaking raises, multi-billion-dollar contracts, full-stack sovereign initiatives, and state-backed chess moves.

At the bottom, police raids, Bitcoin hangovers, legacy baggage dragging operators into the mud, and €100m of alleged tax evasion.

And plenty to write about.

I’m Ben Baldieri. Every week, I break down what’s moving in GPU compute, AI infrastructure, and the data centres that power it all.

Here’s what’s inside this week:

  • Four arrested in raids at Northern Data

  • Fluidstack gets more Google backing

  • Nscale’s $1.1B raise + Nokia deal

  • Alibaba’s omni-modal Qwen3

  • HUMAIN launches a Qualcomm AI PC

  • Alibaba goes full-stack

  • China’s Stargate and the 15th Five-Year Plan

  • The Rundown

Let’s get into it.

The GPU Audio Companion Issue #66

Want the GPU breakdown without the reading? The Audio Companion does it for you, but only if you’re subscribed. If you can’t see it below, click here to fix that.

Four Arrested in Raids at Northern Data

Northern Data is back in the spotlight. For all the wrong reasons.

German prosecutors confirmed raids at the company’s Frankfurt office this week. The cause? A Swedish investigation into legacy crypto mining operations and alleged VAT fraud estimated at over €100 million. Four arrests were made, with Swedish authorities saying the probe centres on misleading or incomplete tax filings by miners benefiting from incentives.

Northern Data insists this is a “misunderstanding”, and that its GPU cloud business and mining operations are completely separate, compliant, and above board.

While this may be true, the timing is messy.

The raids landed days after Northern Data reshuffled leadership: John Hoffman promoted to co-CEO, Scott Bailey hired as COO, Chandan Rajah brought in as CTO, and Charlotte Park elevated to Chief People Officer. They also come in the wake of divestments of non-core units, a high-profile tie-up with Core42, and takeover rumours involving Rumble that valued the firm at $1.17bn.

Why this matters:

  • Individually, those moves could be read as strategic repositioning, but together, they look like firefighting.

  • If Northern Data goes under, or takeover bids fall through, or high-profile partnerships implode, this could be the start of some aggressive consolidation.

  • Whatever’s actually going on, stability is not the word that comes to mind.

Fluidstack Gets More Google Backing

Cipher Mining has signed Fluidstack’s second Google-backed hosting agreement.

The 10-year, 168MW deal is worth $3 billion in contracted revenue. With extension options, it could balloon to 500MW and $7 billion. Google is stepping in to backstop $1.4 billion of Fluidstack’s lease obligations, giving lenders confidence, and in return takes warrants worth ~5.4% of Cipher’s equity. The site in Colorado City, Texas, will deliver critical IT load starting in September 2026.
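For a rough sense of what those headline numbers imply per megawatt, here’s a back-of-envelope sketch. The assumptions are mine, not Cipher’s: revenue is spread evenly over the term, and the extended 500MW / $7 billion case also runs ten years.

```python
# Back-of-envelope maths on the Cipher-Fluidstack figures reported above.
# Assumptions (not from the announcement): revenue is spread evenly over
# the term, and the extended case also runs for 10 years.

def revenue_per_mw_year(total_usd: float, megawatts: float, years: float) -> float:
    """Average contracted revenue per MW per year."""
    return total_usd / megawatts / years

base = revenue_per_mw_year(3e9, 168, 10)      # ~$1.79M per MW-year
extended = revenue_per_mw_year(7e9, 500, 10)  # ~$1.40M per MW-year

print(f"Base deal:    ${base / 1e6:.2f}M per MW-year")
print(f"With options: ${extended / 1e6:.2f}M per MW-year")
```

On those assumptions, the fully extended deal works out to a lower average rate per MW than the base contract.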

Why this matters:

  • A 168MW contract needs an investment-grade credit rating behind it, both to finance the build and to make the commitment credible. As a young company, Fluidstack doesn’t yet have one. Google does.

  • As such, Google can use its clout to derisk Fluidstack’s lease obligations, allowing Cipher to secure project financing. Cipher can then land Fluidstack as a tenant, which translates into ARR and boosts the share price.

  • Google’s concurrent equity stake in Cipher then derisks the Fluidstack backstop itself: if the guarantee unlocks value, Google captures part of it through share price appreciation.

Nscale’s $1.1B Raise + Nokia Deal

Nscale just closed the largest Series B in European history.

The $1.1 billion round was led by Aker ASA, with participation from NVIDIA, Dell, Fidelity, Point72, G Squared, T.Capital, Blue Owl, and Nokia. Funds are being allocated to accelerate the rollout of its “AI factory” across Europe, North America, and the Middle East, with Stargate UK and Stargate Norway serving as flagship projects.

Customers include Microsoft and OpenAI, who are anchoring Nscale’s UK and Nordic deployments.

Nokia, the Finnish IT and communications giant, has joined as a preferred networking partner, bringing IP routing, optical networking, and data centre switching to Nscale’s global AI campus buildouts. The two will co-develop networking stacks for AI clusters and partner on joint deployment initiatives worldwide.

Why this matters:

  • A $1.1B Series B is unprecedented in Europe.

  • Nokia’s partnership strengthens sovereign positioning by keeping the hardware and software stack anchored in Europe.

  • Together with Nebius, Nscale gives Europe two meaningful counterweights to growing US dominance of the AI infrastructure market.

Alibaba’s Omni-Modal Qwen3

Alibaba just dropped Qwen3-Omni.

The new “natively omni-modal” foundation model is built to handle text, images, audio, and video end-to-end, with streaming speech generation that hits sub-second latency (211ms for audio, 507ms for audio-video). Benchmarks show it outperforming Gemini-2.5-Pro and GPT-4o-Transcribe on speech recognition and instruction-following, as well as several other key benchmarks. It supports 119 languages for text, 19 for speech recognition, and 10 for speech generation. Throw in a 30-minute audio input context window, and you have a huge jump in long-form media understanding.

Why this matters:

  • Alibaba has fielded a direct competitor in the real-time multimodal space, tuned for latency-sensitive use cases.

  • With an open-source universal audio captioner, Qwen is pushing into spaces where OpenAI and Anthropic still hold back.

  • Qwen3-Omni runs natively on Alibaba Cloud, another example of model-cloud convergence in China, distinct from the hyperscaler + foundation model separation in the US.

HUMAIN Launches Qualcomm AI PC

Saudi Arabia’s HUMAIN has unveiled its first AI-native laptop, the Horizon Pro PC, at Qualcomm’s Snapdragon Summit.

Built on Snapdragon X Elite, it claims 100 times faster-than-human thought speeds, 18 hours of battery life, and zero-latency wake, paired with HUMAIN’s upcoming OS to unify enterprise workflows, communications, and AI apps. HUMAIN is pitching this as an agentic enterprise device: Arabic-first, privacy-prioritised, and able to operate locally on “ALLAM,” the Kingdom’s flagship LLM, with a hybrid hand-off to the cloud when needed.

Why this matters:

  • HUMAIN now spans from 10GW data centre ambitions to chips, models, and personal hardware.

  • By embedding an Arabic-first LLM into personal devices, HUMAIN has created its own distribution channel.

  • Regional competition is fierce, so this could be an edge in Saudi Arabia’s push for cultural and linguistic leadership in AI.

Alibaba Goes Full-Stack

At Apsara 2025, Alibaba Cloud staked its claim as a full-stack AI provider.

The company launched Qwen3-Max, a trillion-parameter LLM tuned for code and agent workflows, as well as new vision-language and omnimodal (see above) models, all of which are open-sourced to drive adoption. The model push came with enterprise agent platforms (Model Studio upgrades, AgentBay, Lingyang AgentOne) and fresh content tools under the Wan2.5 banner. On the infrastructure side, Alibaba is investing RMB 380B ($52B) to upgrade storage, networking (800Gbps), containers, databases, and security, specifically engineered for agentic AI.

Why this matters:

  • Alibaba is now offering models + agent platforms + infra in one stack.

  • Open-source keeps Qwen central to Asia’s developer base.

  • This move also provides Beijing with a software/services counterweight to Huawei’s hardware-led strategy. Expect further integration and alignment in the future.

China’s Stargate and 15th Five-Year Plan

On an island in Wuhu, rice paddies are giving way to data centres.

Officials are calling it the “Stargate of China”: a state-backed AI mega-cluster linking Huawei with China Telecom, China Unicom, and China Mobile. With only ~15% of global AI compute versus the US’s 75%, China is consolidating idle chips from remote sites, wiring them into clusters via Huawei’s new UB-Mesh interconnect. Local subsidies cover up to 30% of AI chip costs, with Huawei and Cambricon scrambling to fill the Nvidia void under export bans.

Why this matters:

  • State backing at this scale, set alongside Huawei’s roadmap, Alibaba’s open-source model push, and the involvement of every major Chinese telecom company, shows just how strategically important Beijing considers domestic AI compute.

  • While this project might not match OpenAI’s $500bn Texas campus, it’s a strong signal that Beijing is moving to centralise and triage scarce compute for training and inference.

The Rundown

Stratification is now the defining feature of the neocloud sector.

On one end, you have relative upstart operators landing billion-dollar rounds, securing sovereign AI partnerships, and rolling out global campuses with heavyweight investors at their back.

In the middle, others are leaning on the very hyperscalers they once set out to compete with, trading equity stakes for the guarantees needed to make eye-wateringly large numbers work.

And at the far end, companies once hailed as pioneers are being raided by law enforcement, trying to shake the weight of past decisions before being dragged down forever.

Winners are pulling away with capital, customers, and credibility. Survivors are improvising around balance sheets that increasingly don’t add up. And everyone else?

They’ll be left to the courts, the regulators, or the acquirers of last resort.

Whether they realise it yet or not.

See you next week.
