
Issue #25: TSMC & Intel JVs, A New NCP, & The Second DeepSeek Moment

Feat. TSMC, Intel, Nvidia, Broadcom, CoreWeave, OpenAI, Manus, and Microsoft

We could be looking at the comeback of the century.

TSMC and Intel are reportedly in talks to form a joint foundry venture. If this goes through, it would bring Intel deeper into the AI chip manufacturing race while giving TSMC a strategic foothold in the US.

But wait! There’s more!

Microsoft’s relationship with OpenAI is looking rockier than ever, a new AI agent system called Manus is making waves, and AI-driven data centre investment hit a record $57 billion last year. Plus, CoreWeave did more things.

I’m Ben Baldieri, and every week I break down the moves shaping GPU compute, AI infrastructure, and the data centres that power it all.

Here’s what’s inside this week:

  • TSMC & Intel’s foundry joint venture talks

  • CoreWeave signs OpenAI

  • Together AI joins Nvidia’s Cloud Partner Programme

  • Manus AI and the second DeepSeek moment

  • Meta’s AI training chip enters testing

  • Microsoft & OpenAI’s relationship gets messier

  • AI data centre investment hits a record $57B

Let’s get into it.

The GPU Audio Companion Issue #25

Want the GPU breakdown without the reading? The Audio Companion does it for you—but only if you’re subscribed. Fix that here.

TSMC & Intel’s Foundry Joint Venture

TSMC and Intel are reportedly still in talks to form a joint foundry venture that could also bring AMD, Broadcom, and Nvidia into the fold.

The idea? A shared manufacturing effort that helps Intel catch up in advanced nodes while allowing TSMC to expand its presence in the US. For Intel, this would be a massive win. It has struggled to keep pace with market demands in recent years, and a joint foundry operation would allow it to tap into TSMC’s expertise while leveraging its own US-based infrastructure. Nvidia, AMD, and Broadcom are all reportedly in discussions to participate, which suggests they see value in a more diversified foundry landscape.

For TSMC, the motivation is clear:

Diversify production outside Taiwan and get deeper into the US market while securing long-term commitments from AI chip heavyweights.

Why this matters:

  • Nvidia and AMD rely heavily on TSMC, but a US-based foundry with Intel could change the balance of power.

  • With growing geopolitical tensions, diversifying chip production outside Taiwan is critical.

  • The breadth of potential participants - Nvidia, AMD, and Broadcom alongside Intel - shows how much appetite there is for a second source of advanced-node capacity.

Read the full story here.

CoreWeave Signs OpenAI

CoreWeave just landed a big one.

Hot on the heels of the IPO announcement, the AI cloud provider has signed a deal with OpenAI to deliver dedicated infrastructure for training and inference workloads. For CoreWeave, this is a major validation of its GPUaaS business model. It has positioned itself as the go-to provider for high-performance AI workloads, and with OpenAI on board, it’s cementing its place in the AI infrastructure ecosystem.

Why this matters:

  • While OpenAI has been heavily reliant on Microsoft’s Azure cloud, this deal suggests it wants to diversify.

  • CoreWeave keeps stacking big-name customers, with OpenAI joining a growing list that includes Inflection AI, Mistral, and Stability AI.

  • If OpenAI is diversifying beyond Azure, others might follow.

Read the full story here.

Together AI Joins Nvidia’s Cloud Partner Programme

Together AI is now officially an Nvidia Cloud Partner.

The startup has positioned itself as a cost-effective alternative to the big cloud players, focusing on AI infrastructure that’s optimised for training open-source models. With this new designation, it’s now in a stronger position to compete with other neoclouds like CoreWeave, Lambda, and Crusoe. That means greater access to the latest and greatest hardware on the one hand and more market competition on the other.

Coupled with the fact that Together AI now also offers the full Nvidia AI Enterprise (NVAIE) suite, that’s good for everyone.

Why this matters:

  • NCP status means Together AI gets priority access to cutting-edge hardware.

  • The NVAIE suite unlocks a huge amount of AIaaS capability beyond mere compute - think Nvidia GPU-optimised agent blueprints and pre-trained models.

  • Blackwell priority plus an enterprise-grade AI toolset should give you an indication of the direction of travel.

Read the full story here.

Manus AI and the Second DeepSeek Moment in as Many Months

A new AI agent player has entered the chat.

Manus AI, developed by China’s Butterfly Effect, has been making waves, with industry heavyweights like Jack Dorsey and Hugging Face’s Victor Mustar singing its praises. Those lucky enough to get access describe it as a hyper-intelligent, if occasionally lazy, intern that’s capable of breaking down complex tasks, autonomously browsing the web, and refining its responses based on feedback. It’s even being called “the second DeepSeek”.

But…almost no one has actually used it. Access is restricted, crashes are frequent, and server overloads are common.

Why this matters:

  • Agentic AI tools like Manus are the first real indication of just how capable assistants may one day become.

  • This is another indication that China’s AI industry isn’t just catching up but leading in certain areas, and if the trend continues, it could shift enterprise adoption in nontraditional markets away from Western offerings.

  • Just as with Deep Research and Hugging Face, developers have already been hard at work getting Manus to build open-source versions of itself - such as the incredibly named ANUS - further proving that the real battle is open vs closed.

Read the full story here.

Meta’s AI Training Chip Enters Testing

Meta has started testing its first in-house AI training chip.

The chip, known as the MTIA (Meta Training & Inference Accelerator), is designed specifically for AI workloads. While it’s unlikely to rival Nvidia’s GB200 or AMD’s MI325X anytime soon, this is part of a broader push to reduce reliance on what Meta likely sees as potential competitors in the future. And with Amazon, Google, and Microsoft all pursuing custom silicon as well, it signals that Meta is serious about controlling its AI infrastructure stack.

Why this matters:

  • Microsoft, Amazon, Google, and now Meta are all developing their own AI chips - good for Broadcom, less good for Nvidia.

  • Training Llama models isn’t cheap, and in-house chips could help cut costs.

  • If this initial small deployment of the chip goes well, Meta plans to ramp up production for wide-scale use.

Read the full story here.

Microsoft & OpenAI’s Relationship Gets Messier

Microsoft CEO Satya Nadella just made it clear: OpenAI is a product company, not a model company.

The message? Microsoft sees OpenAI as a means to an end, not the sole driver of its AI ambitions. This is a shift from the early days of their partnership, when OpenAI was positioned as Microsoft’s generative AI powerhouse. But with Microsoft pushing its own AI models, integrating competitors like Mistral AI and even DeepSeek (which sama wants banned), and expanding its cloud partnerships, it’s clear the company isn’t betting everything on OpenAI anymore.

Why this matters:

  • OpenAI is losing its privileged position in Microsoft’s stack.

  • Competitor integration means OpenAI loses one of the most important aspects of any GTM strategy - distribution.

  • Where the ROI is going to come from for the massive digital infrastructure buildout still isn’t clear, and Satya’s patience is clearly starting to wear thin.

Read the full story here.

AI Data Centre Investment Hits Record $57B in 2024

AI is reshaping the entire data centre market.

New figures show that AI infrastructure investment hit a record $57 billion in 2024, nearly double the previous year. Demand is so high that some providers are locking in grid capacity years in advance, and new data centre projects are seeing unprecedented interest from private equity, sovereign wealth funds, and traditional infrastructure investors.

Hyperscalers, neoclouds, and enterprises alike are scrambling to secure GPU capacity, and the land grab for power and real estate is only getting fiercer.

Why this matters:

  • AI is driving the biggest data centre expansion in history, even though it’s still a relatively small part of the market (25%).

  • The hyperscalers aren’t the only players anymore.

  • Neoclouds and enterprise AI firms are also building their own infrastructure, so $57B is likely just the beginning. Expect even bigger numbers in 2025.

Read the full story here.

The Rundown

And there I was, worrying I wouldn’t have enough to write about this week in the wake of DCW in London.

Clearly, those fears were misplaced, and it’s been another big week for changes in market dynamics.

TSMC and Intel are in talks for a foundry collaboration, a move that could radically reshape the global semiconductor landscape. Couple this with Microsoft distancing itself from OpenAI, and Meta putting its first in-house AI training chips into testing, and you have the first indications of a significant direction change. And with Satya calling OpenAI a product company while sama wants his Chinese competitors banned instead of just “building a better product”, it seems like the tide is starting to go out.

Who’s been swimming naked?

If I knew the answer to that, I’d be a very rich man.

See you next week.

p.s. I’m at GTC next week (for the first time). If you want to meet up, reply to this email or grab me on the show floor!

Keep The GPU Sharp and Independent

Good analysis isn’t free. And bad analysis? That’s usually paid for.

I want The GPU to stay sharp, independent, and free from corporate fluff. That means digging deeper, asking harder questions, and breaking down the world of GPU compute without a filter.

If you’ve found value in The GPU, consider upgrading to a paid subscription or supporting it below:

Buy Me A Coffee: https://buymeacoffee.com/bbaldieri

It helps keep this newsletter unfiltered and worth reading.
