Some chase model performance. Others sell the GPUs that train them.
But none of it runs, at scale, under load, with real SLAs, without the network.
Because models don’t work unless they move.
And movement is the first thing to break when your infra isn’t ready.
Networking - boring, high-capex, often overlooked - is suddenly at the heart of the AI race. The switches, routers, telemetry pipelines, and software-defined fabrics that move hundreds of terabytes per second. Not just between racks, but between regions, clouds, and sovereign jurisdictions.
And that’s where this company comes in.
They’ve been building this stuff for decades.
The fibre, the protocols, the control planes. Quietly powering the backbones of telcos, the early cloud, and now, the AI factories emerging across Europe, Asia, and North America.
They’ve shipped infrastructure built on both custom and Broadcom silicon, powering AI workloads across Tier 1 clouds and enterprises.
And while other vendors scramble to retrofit their systems for GPU-era traffic, they’re already on their third generation of AI-ready architectures.
Who are they?
Company Background
Juniper isn’t a startup.
Founded in 1996, the company made its name building carrier-class routers during the first internet boom. They scaled telco backbones, supported the early hyperscalers, and laid the foundations for MPLS, SDN, and eventually the software-defined overlay networks behind the cloud buildout. More recently, their market share in enterprise networking has taken off since the acquisition of Mist Systems in 2019. Mist brought AIOps capabilities that were ahead of their time, and Juniper has been “mistifying” its portfolio ever since. Today it’s a $5.5 billion public company with over 11,000 employees and nearly three decades of network engineering behind it.
But the cloud was just a warm-up.
AI broke everything.
Traffic patterns inverted. AI training flooded networks with east-west traffic. Racks talking to racks, and nodes talking to nodes, non-stop.
Telemetry moved from “nice to have” to mission-critical, and lossless switching became a requirement, not an optimisation.
With AI, hyperscaler-style buildouts are no longer just for the Googles and Amazons of the world. Every government, defence agency, and enterprise with sovereign ambitions started building its own mini-AI factory.
And Juniper didn’t react.
They anticipated.
Instead of slapping “AI-ready” on a chassis, they split the challenge in two:
Networks for AI: High-performance, loss-aware, silicon-optimised fabrics for AI-scale throughput.
AI for Networks: Intent-based automation, real-time analytics, and predictive failure detection for lights-out operations.
That dual-stack approach, infra on the bottom and AI-ops on top, is now their competitive edge.
Networks for AI
On the hardware side, Juniper’s infrastructure now powers some of the largest AI-native data centres outside the US hyperscaler ecosystem.
Their PTX and QFX lines ship with 400G and 800G switch fabrics, built for large-scale training and inference clusters. That includes support for spine-leaf and folded-Clos architectures, with minimal packet loss and full observability across layers.
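For intuition, a two-tier spine-leaf (folded-Clos) fabric can be sized with simple arithmetic. The sketch below is a back-of-envelope calculator; the port counts and link speeds are illustrative assumptions, not Juniper hardware specifications:

```python
# Back-of-envelope sizing for a two-tier spine-leaf (folded-Clos) fabric.
# Port counts and link speeds are illustrative assumptions, not Juniper specs.

def size_clos(leaf_down: int, leaf_up: int, spine_ports: int, link_gbps: float) -> dict:
    spines = leaf_up          # one uplink from each leaf to every spine
    leaves = spine_ports      # each spine port connects to exactly one leaf
    return {
        "spines": spines,
        "leaves": leaves,
        "server_ports": leaves * leaf_down,
        "oversubscription": leaf_down / leaf_up,        # 1.0 = non-blocking
        "bisection_tbps": leaves * leaf_up * link_gbps / 1000,
    }

# Hypothetical example: 64-port spines, leaves with 32 ports down / 32 up at 800G.
fabric = size_clos(leaf_down=32, leaf_up=32, spine_ports=64, link_gbps=800)
print(fabric)
```

With those assumed numbers, the fabric works out to 32 spines, 64 leaves, 2,048 server-facing ports, and roughly 1.6 Pb/s of bisection bandwidth at 1:1 oversubscription, which is the scale regime the 400G/800G switch fabrics above are built for.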
Their in-house Express silicon series already supports 1.6T and 3.2T next-generation switching, with a roadmap toward multi-terabit links and liquid-cooled infrastructure. The PTX line runs on Express; the QFX line uses Broadcom merchant silicon.
And most importantly?
No forklift upgrade needed. The platforms are built to scale without physical replacement, keeping long-term capex lower for high-growth customers.
In sovereign deployments across Europe and Asia, Juniper’s switches now handle inter-node AI traffic for critical public infrastructure. In Tier 1 hyperscaler environments, they serve as the transport layer behind GPU clusters (though many customer names remain confidential under NDA).
Because throughput isn’t the only concern.
Uptime, predictability, and pressure-tested reliability are all essential - something most newer entrants haven’t had time to prove.
AI for Networks
But raw hardware doesn’t get you far when failure domains are measured in seconds and GPU time costs $20/hour.
That’s where Juniper’s software stack kicks in.
Apstra Data Center Director, its intent-based fabric controller, lets operators define the end state of their networks - “what I want it to do” - rather than every manual setting along the way.
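The core idea of intent-based networking fits in a few lines: declare the desired end state, observe the actual state, and let a reconciler compute the actions that close the gap. This is a minimal sketch of the pattern with hypothetical VLAN data, not Apstra's actual data model or API:

```python
# Minimal intent-based reconciliation: compare desired vs. observed state
# and emit the actions needed to converge. VLAN and port names are
# hypothetical; this shows the pattern, not Apstra's real interface.

desired = {
    "vlan10": {"eth1", "eth2"},
    "vlan20": {"eth3"},
}
observed = {
    "vlan10": {"eth1"},
    "vlan30": {"eth4"},   # stale state the operator no longer wants
}

def reconcile(desired: dict, observed: dict) -> list:
    actions = []
    for vlan, want in desired.items():
        have = observed.get(vlan, set())
        for port in sorted(want - have):      # missing attachments
            actions.append(("attach", vlan, port))
        for port in sorted(have - want):      # extra attachments
            actions.append(("detach", vlan, port))
    for vlan in sorted(observed.keys() - desired.keys()):
        actions.append(("remove", vlan))      # whole VLANs no longer intended
    return actions

plan = reconcile(desired, observed)
```

The operator only ever edits `desired`; the controller re-runs the loop whenever telemetry shows drift. That is what "define the end state" means in practice.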
Mist and Marvis, Juniper’s AIOps platform and engine respectively, ingest real-time telemetry and automate fault prediction, anomaly detection, and root cause analysis. It’s one of the few platforms that genuinely self-heals under load, closing feedback loops across switch fabrics, edge devices, and WAN gateways. That combination, along with the Marvis AI Assistant, makes Juniper’s network stack one of the most programmable, monitorable, and autonomous on the market.
And in GPU environments, where even milliseconds matter, that kind of predictability is gold.
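The fault-prediction idea is easy to illustrate. Below is a toy anomaly detector that flags telemetry samples deviating from a rolling baseline by more than k standard deviations - a sketch of the statistical concept only, not how Marvis actually works:

```python
# Toy telemetry anomaly detector: flag samples more than k standard
# deviations from a rolling baseline. Illustrates the statistical idea
# behind telemetry-driven fault prediction; not Marvis internals.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window: int = 8, k: float = 3.0) -> list:
    baseline = deque(maxlen=window)   # most recent `window` samples
    flagged = []
    for i, x in enumerate(samples):
        if len(baseline) == window:   # only score once the baseline is full
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and abs(x - mu) > k * sigma:
                flagged.append(i)
        baseline.append(x)
    return flagged

# Hypothetical latency readings (microseconds) with one spike at index 8.
readings = [100, 102, 98, 101, 99, 103, 97, 100, 500, 101]
print(detect_anomalies(readings))
```

In production the inputs would be streaming interface counters, queue depths, and optics readings rather than a fixed list, and the baselines far more sophisticated, but the feedback loop - observe, score, act - is the same shape.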
But there’s nuance here.
Telemetry is only as good as the silicon that produces it and the management software that structures and analyses it. Juniper hedges its bets, offering in-house ASICs on some hardware platforms and Broadcom merchant silicon on others. On top, the Junos operating system, Apstra Data Center Director, and Data Center Assurance manage everything, even switching hardware from Cisco, Dell, and Arista. Juniper’s biggest bet is on openness – and why wouldn’t it be? They compete against 800-pound heavyweights like Cisco and Nvidia.
It’s a bet not every buyer agrees with.
But for regulated, high-value environments like sovereign AI deployments, it’s finding traction.
Executive Team
Rami Rahim (Chief Executive Officer) - Started at Juniper in 1997 as an ASIC engineer. Employee #32.
Manoj Leelanivas (EVP, Chief Operating Officer) - Longtime Juniper executive, previously holding several EVP and GM roles.
Raj Yavatkar (SVP, Chief Technology Officer) - Formerly Google, VMware, Intel. IEEE Fellow, PhD in Computer Science.
Sharon Mandell (SVP, Chief Information Officer) - Formerly TIBCO, Harmonic, Knight Ridder, and the Tribune Company.
David Cheriton (Chief Data Scientist) - Co-founder of Apstra (acquired by Juniper), Arista, Kealia (acquired by Sun Microsystems), Granite Systems (acquired by Cisco)
The Edge
High-Performance Networking: From data centre cores to cloud metro, Juniper provides the hardware and software backbone that moves AI-scale data.
AI for Networking: Their Mist AI-native platform brings AIOps, intent-based automation and real-time insights to network management. That’s AI on the network, not just running through it.
End-to-End Fabric: They offer full-stack solutions across routing, switching, security, campus & branch, data centre, and WAN. Whether it’s a hyperscaler spine-leaf or a smaller enterprise, Juniper’s hardware and software play both ends.
Cloud + AI Native Stack: With 400G-800G ready kit, telemetry-rich pipelines, and hardware built for line-rate processing, Juniper’s stack is increasingly built for real-time AI and cloud-native demands.
Recent Moves
Expanded 800G Capability: Juniper rolled out 800G-capable routers and switches in early 2024 to support ultra-high-throughput, latency-sensitive environments - critical for AI clusters and inference farms.
Hyperscaler Wins: Juniper’s networking infrastructure stacks now power multiple hyperscale deployments, underpinning large GPU clusters with AI-optimised transport.
AIOps for Networking: Their Mist AI platform, Marvis AI engine, and Marvis AI Assistant continue to gain traction - automating troubleshooting, performance tuning, and SLA compliance across enterprise and telco networks.
Edge Push: They’ve invested heavily in metro cloud infrastructure, designed for edge inference, smart city workloads, and AI data shuttling between central and distributed nodes.
What’s Next
2025 is a pressure-test year for Juniper.
Early deployments are already live in Grace Blackwell environments, pairing Juniper’s AIOps tooling with NVIDIA’s BlueField DPUs. Liquid cooling support is coming. So is hardened multi-tenant observability for sovereign and defence workloads. Express 6, Juniper’s next-gen ASIC, is already in testing. It’s designed for 1.6T and 3.2T switching, AI-scale telemetry, and isolated workload slices.
The goal? Own the programmable edge—where observability, latency, and policy enforcement converge.
But the risks are real.
Hyperscalers are building in-house. Arista still dominates cloud switching. Open networking is gaining traction in Asia and other cost-sensitive markets.
Juniper’s counter?
Open ecosystem solutions validated through their AI Lab. Predictable automation. Sovereign-grade support.
No hype. No headlines. Just infrastructure that works under pressure.
Juniper knows the edge won’t be won by whoever moves fastest.
It’ll be won by whoever keeps it online.