The US government just told its diplomats to kill the market its own companies are building for.

This week, Reuters reported on a leaked State Department cable directing US diplomats to fight data sovereignty laws worldwide. The same week, three major AI infrastructure launches. Each one purpose-built for the exact demand Washington is trying to suppress.

The mask is off.

And the market is moving in the opposite direction.

I'm Ben Baldieri, and every week I break down the moves shaping GPU compute, AI infrastructure, and the data centres that power it all.

Let’s get into it.

The GPU Audio Companion Issue #93

Want the GPU breakdown without the reading? The Audio Companion does it for you, but only if you’re subscribed. If you can’t see it below, click here to fix that.

The GPU's First Media Partnership

I'm heading to New York March 23–24 for The Xcelerated Compute Show.

This is The GPU's first media partnership, which still feels a bit surreal given that I started writing it from my bedroom in Streatham Hill. But the lineup speaks for itself: chipmakers, neocloud operators, hyperscalers, and infrastructure buyers who rarely end up in the same room.

The best conversations in this space happen off-stage, and this event is built for exactly that.

That’s why I’ve got a present for you.

Click the button below, use the code GPU26 for a free pass, and come find me on the floor.

US Orders Diplomats to Kill Data Sovereignty Laws

Washington is lobbying against the same sovereign AI demand that its own companies are racing to supply.

A State Department cable dated February 18 and signed by Secretary of State Marco Rubio, as reported by Reuters, directed US diplomats to "counter unnecessarily burdensome regulations, such as data localisation mandates," calling them a threat to AI and cloud services. The cable explicitly named GDPR as an example and tasked diplomats with tracking proposals to restrict cross-border data flows. It also accused China of "bundling enticing technology infrastructure projects with restrictive data policies" to expand surveillance.

Why this matters:

  • American infrastructure now comes bundled with American jurisdiction, FISA Section 702 exposure, and explicit diplomatic pressure against local data protections.

  • The China framing cuts both ways. Rubio accuses Beijing of bundling infrastructure with restrictive data policies to expand surveillance. That's the same critique sovereign buyers make about US cloud providers.

  • For any CTO evaluating whether to build on US hyperscaler infrastructure or invest in sovereign alternatives, this cable just became Exhibit A in the risk assessment.

Brookfield’s Radiant Acquires Ori

Brookfield just turned a seven-year-old UK neocloud into the front-end of a $100 billion AI infrastructure programme.

Radiant, Brookfield's portfolio company within its AI Infrastructure Fund (BAIIF), has merged with Ori Industries. The result? One of the world's first purpose-built integrated AI compute companies, deploying Blackwell, GB200 NVL72, and Rubin hardware on NVIDIA's DSX reference design. Radiant gets Ori's operational cloud across 20+ data centres, proprietary software stack, and GPUaaS business. Ori gets access to BAIIF's $100 billion deployment pipeline. Ori's Global AI Cloud continues operating for on-demand customers.

Why this matters:

  • Brookfield is now fully vertically integrated: powered land, long-term capital, compute hardware, and operational software. No other private capital player has assembled all four.

  • The two-tier structure (Radiant for long-term sovereign contracts, Ori for on-demand GPUaaS) is a hedge against whether AI compute consolidates around reserved capacity or stays fragmented. It's also a funnel: on-demand users who scale become long-term contract candidates.

  • We first covered Ori in Issue #19 when Saudi Aramco's Wa'ed Ventures invested to expand into Riyadh. Under Brookfield, it goes from mid-tier UK neocloud to the operational layer of a $100 billion programme. That's the consolidation pattern from Issue #91's rundown playing out in real time — more names entering the neocloud market, but the biggest capital allocators are absorbing the operators who built the early infrastructure.

Sharon AI, Cisco Launch Australia's First Secure AI Factory

Sharon AI is turning last month's Cisco partnership into hardware.

The recently NASDAQ-listed Australian neocloud (SHAZ) and Cisco have launched Australia's first Cisco Secure AI Factory. The project is built with NVIDIA, powered by 1,024 Blackwell Ultra GPUs, Cisco UCS servers, Nexus Hyperfabric networking, and VAST Data storage, all hosted in NEXTDC's Australian data centres. The deployment keeps all data and AI processing within Australia, aligning with the country's new National AI Plan. Sharon AI will offer bare metal, managed AI services, inference, Kubernetes, fine-tuning, and a sandbox environment for enterprise proofs-of-concept.

Why this matters:

  • This is the execution on the $200M Digital Alpha/Cisco partnership and APAC pivot we covered in Issue #86, when Sharon AI divested its Texas data centre stake to go all-in on Australian sovereign compute. Six weeks later, they've gone public and stood up a full-stack Cisco AI Factory.

  • Cisco's involvement makes this an enterprise play, not just a GPU leasing operation. The Secure AI Factory branding wraps networking, security, and observability around the compute layer, exactly what regulated verticals require before they'll move AI workloads off their own servers.

  • Sharon AI is positioning itself as a sovereign full-stack alternative in a market where the only other options are US hyperscalers subject to American jurisdiction and diplomatic pressure that’s no longer theoretical.

Anthropic Accuses Chinese Labs of Stealing Claude's Outputs

Anthropic's latest report accuses Chinese labs of distilling Claude's outputs at scale. The campaigns allegedly routed through commercial proxy services to bypass the regional access restrictions that block China, distributing traffic across "hydra cluster" architectures of fraudulent accounts. Anthropic has framed this as a national security issue, arguing that distilled models strip safety guardrails and that the attacks undermine US export controls by making Chinese progress appear more independent than it is. A bold stance from a company whose own data practices are under active litigation.

Per a January 2026 Washington Post report, Anthropic settled an authors' lawsuit for $1.5 billion last year after Project Panama, an internal initiative to purchase, shred, and scan millions of books for training data, was made public through court filings. Internal documents stated "we don't want it to be known that we are working on this." Reddit has separately sued Anthropic for scraping its platform over 100,000 times after claiming to have stopped, ignoring robots.txt files, and refusing to engage on licensing. The Chinese labs were paying for API access through proxies. Anthropic took Reddit's data without paying at all.

Why this matters:

  • Closed architectures concentrate capability behind API walls. That concentration is what makes distillation attractive.

  • Anthropic's report reads as an indictment of Chinese labs, but it's also an indictment of the closed model structure itself.

  • If your competitive advantage runs on someone else's model behind someone else's API, it can be extracted by anyone willing to pay for access.

Meta Signs 6GW AMD Deal

Meta has committed to 6GW of AMD compute capacity. The deal goes beyond chip procurement: Meta and AMD have agreed to align product roadmaps across silicon, systems, and software. Initial deployments will use the Helios rack-scale architecture that Meta and AMD co-developed for last year's Open Compute Project Summit. Meta has positioned the agreement under its broader Meta Compute initiative, which combines external GPU sourcing from multiple vendors with its own MTIA silicon programme.

Why this matters:

  • Zuckerberg framed this as compute diversification: avoiding single-vendor dependency as Meta scales toward “personal superintelligence”.

  • 6GW is an extraordinary number. For context, the entire UK grid peaks at around 35-40GW. Meta is committing enough AMD compute capacity alone to rival the power demand of a mid-sized country, and that's on top of its existing NVIDIA commitment.

  • AMD needed this. The Goldman Sachs-Crusoe financing deal (Issue #91) showed AMD backstopping its own chips with balance-sheet guarantees to get them deployed. A 6GW Meta commitment is a different kind of validation: a top-three hyperscaler choosing AMD not as a fallback but as a co-design partner with roadmap alignment. That's the signal the market has been waiting for since MI300X launched.
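For a back-of-envelope sense of how 6GW compares to the UK grid figure quoted above, a minimal sketch (both numbers taken from this issue, nothing else assumed):

```python
# Scale check: Meta's 6 GW AMD commitment vs UK peak grid demand (35-40 GW).
meta_amd_gw = 6.0
uk_peak_low_gw, uk_peak_high_gw = 35.0, 40.0

# Share of UK peak demand, bounded by the low and high ends of the quoted range.
share_low = meta_amd_gw / uk_peak_high_gw    # against the 40 GW high end
share_high = meta_amd_gw / uk_peak_low_gw    # against the 35 GW low end

print(f"Meta's AMD commitment alone equals {share_low:.0%}-{share_high:.0%} "
      f"of UK peak grid demand")
# → Meta's AMD commitment alone equals 15%-17% of UK peak grid demand
```

In other words, one vendor deal from one hyperscaler is in the range of a sixth of what the entire UK grid delivers at its busiest moment.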

Microsoft Ships Disconnected Sovereign Cloud

The fully disconnected sovereign cloud just went from concept to product.

Microsoft has launched three capabilities that let organisations run Azure infrastructure, Microsoft 365 productivity tools, and large AI models entirely on-premises with no cloud connectivity whatsoever. Azure Local disconnected operations handle governance and policy enforcement locally. Microsoft 365 Local brings core business software inside the customer's sovereign boundary. Foundry Local now supports large multimodal models running on NVIDIA GPUs with local inferencing and APIs, all within customer-controlled data boundaries. The stack spans connected, intermittently connected, and fully disconnected modes, letting customers choose the control posture per workload without fragmenting their architecture.

Why this matters:

  • Every Sovereign AI story we’ve covered centred on the same buyer requirement: frontier AI capability without data leaving sovereign boundaries. Microsoft just productised that requirement as a single deployable stack.

  • For neoclouds pitching sovereign compute, Microsoft's entry into the disconnected market validates the category but raises the competitive bar. Selling sovereignty against a hyperscaler that can't operate disconnected is one pitch. Selling it against a hyperscaler that ships Exchange, SharePoint, Azure governance, and large-model inference in a single air-gapped deployment is a different conversation entirely.

  • The neoclouds that survive this will be the ones offering something Microsoft can't bundle: regional relationships, local power assets, or specialised hardware configurations that don't fit the Azure Local form factor.

Space Data Centres: “Ridiculous,” “AI Snake Oil,” and “Peak Insanity”

OpenAI CEO Sam Altman called the idea "ridiculous … with the current landscape." Short seller Jim Chanos labelled it "AI Snake Oil from the Silicon Valley promoter class." Gartner VP analyst Bill Ray went with "peak insanity," publishing a report titled Orbital Datacenters Won't Serve Terrestrial Needs, so Focus on Earth.

Gartner's case leaned hard on two things that are very difficult to argue with:

Math and physics.

Space-grade solar panels cost 1,000x their terrestrial equivalents. Orbital temperatures swing between 100 and 400 kelvin, a range far beyond anything recorded on Earth. Cooling relies on ammonia loops like those on the ISS, and laser downlinks fail through cloud cover. Ray's conclusion: orbital compute only makes sense for data produced and consumed in space.
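To make the 1,000x solar figure concrete, a rough sketch. The ~$0.30/W terrestrial panel price is my own ballpark assumption for utility-scale hardware, not a number from Gartner's report; only the 1,000x multiplier comes from the text above:

```python
# Rough cost implication of the 1,000x space-grade solar multiplier.
terrestrial_usd_per_watt = 0.30                                # assumed ballpark, not sourced
space_grade_usd_per_watt = terrestrial_usd_per_watt * 1_000    # Gartner's 1,000x figure

facility_mw = 1                                                # a single, small 1 MW facility
panel_cost_usd = space_grade_usd_per_watt * facility_mw * 1_000_000

print(f"Panels alone for a {facility_mw} MW orbital facility: "
      f"${panel_cost_usd / 1e6:.0f}M")
# → Panels alone for a 1 MW orbital facility: $300M
```

Under those assumptions, the power hardware for a facility smaller than a single terrestrial data hall costs hundreds of millions of dollars before a single rack goes up.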

Why this matters:

  • The infrastructure bottleneck driving orbital data centre speculation is real — power constraints, grid congestion, and permitting delays are choking terrestrial build-out in every major market. The core pushback here is that the orbital pitch is a fundraising narrative attached to a genuine problem.

  • SpaceX's proposal for a million satellites and Google's 2027 deployment target have attracted serious investor attention. This week's pile-on may mark the point at which that capital starts looking for the exit, or at least stops writing new cheques.

  • None of this means orbital compute is permanently dead. Bezos gave it 20 years. Altman said not this decade. The gap between vision and viable engineering is real, but so was the gap for reusable rockets in 2005.

The Rundown

Sovereignty stopped being a policy debate this week. It became a market structure.

Washington told its diplomats to fight data localisation. Microsoft, Brookfield, and Sharon AI shipped products built for it. That split is going to define the next phase of AI infrastructure investment.

Washington sees sovereign compute as market fragmentation.

Buyers see it as derisking against the jurisdictional overreach that the cable represents.

Meta's 6GW AMD deal is a response to the same category of risk from a different source. Single-vendor dependency creates exposure, and reducing it mirrors the way nations are reducing their single-jurisdiction dependency.

So, what does this all mean?

Control is migrating downward.

Away from platforms, toward infrastructure owners. Away from API providers, toward weight holders. Away from diplomatic leverage, toward physical assets: power, land, racks, and the legal boundaries they sit inside.

The cable made the implicit explicit.

Sovereign AI infrastructure isn't emerging despite American opposition.

It's accelerating because of it.

See you next week.
