GPU clusters don't run themselves.
Someone has to provision them. Isolate tenants. Manage the network. Handle upgrades. Meter consumption. The hyperscalers do it with proprietary stacks. Red Hat does it with OpenShift. VMware did it until Broadcom made everyone nervous.
Neoclouds need the same capabilities without the lock-in.
They need multi-tenant GPU isolation, sovereign deployment controls, and Day 2 operations that don't require rebuilding the platform every time NVIDIA ships new silicon.
This company has spent fifteen years building exactly that. They pioneered running OpenStack on Kubernetes. They built a Kubernetes IDE with over a million users. Now they're betting the company on becoming the operating layer for AI infrastructure.
Who are they?
The GPU Audio Companion Issue #90
Want the GPU breakdown without the reading? The Audio Companion does it for you, but only if you’re subscribed. If you can’t see it below, click here to fix that.
What is Mirantis?
Mirantis is a platform company.
Not a GPU cloud. Not a hyperscaler. A software layer that turns raw GPUs into a multi-tenant, sovereign-ready AI cloud, handling provisioning, isolation, metering, and lifecycle management so operators don't have to build it themselves.
The company emerged in 2011 to make open-source infrastructure work in production.
Co-founders Alex Freedland and Boris Renski helped form the OpenStack Foundation and built Mirantis into the largest independent OpenStack integrator.
When the market shifted to containers, they acquired Docker Enterprise in November 2019. That netted them the IP and talent, 750 enterprise customers, and a third of the Fortune 100. They then modernised that stack: Docker Enterprise became Mirantis Kubernetes Engine (MKE), underpinned by a single-binary distribution (k0s - open sourced and governed by the CNCF) that runs on anything from a Raspberry Pi to clusters of thousands of nodes.
Today, ~500 employees operate globally, including infrastructure and AI/ML engineers from Google, HP, Dell, Juniper Networks, and Docker.
The company is cash-flow positive with ~80% recurring revenue and, as of January 2024, Alex Freedland is back at the helm to lead the AI infrastructure pivot.
What Problems Does Mirantis Solve?
Neoclouds are running into three problems simultaneously:
Tech stack complexity. Divergent tooling, APIs, and security controls across providers and regions. Inconsistent networking, identity, and storage patterns. Hardware and platforms proliferate faster than teams can standardise. Every new GPU generation forces another round of integration work.
Operational efficiency. No centralised view of cluster health, state, cost, and risk across environments. Configuration drift creates instability and compliance gaps. Fragmented Day 2 tooling increases manual work. GitOps becomes brittle when environments aren't consistent.
Customer experience. Slow time-to-value onboarding new GPU capacity. Difficulty maximising utilisation while meeting strict tenant requirements. Hard multi-tenancy raises the bar for reliability, isolation, and control. Portability expectations collide with provider-specific differences.
The regulatory stakes compound all of this. European neoclouds need data privacy and sovereignty controls, consistent audit trails, and provable cybersecurity compliance (DORA, NIS2) across every environment. US government buyers need FIPS 140 and DISA STIG. These are baseline requirements for the enterprise customers neoclouds are targeting next.
How Mirantis Solves these Problems
Mirantis treats infrastructure and services as templates that reconcile continuously into running environments - define once, deploy everywhere, control drift, manage upgrades. To deliver this, they position a three-stage progression for neocloud operators:
Stage 1: Bare metal Kubernetes + managed services. Expand margin by offering value-added AI services and managed platforms (e.g., SLURM, Ray, and inference services).
Stage 2: Kubernetes + virtualisation - optimise economics. Move utilisation toward 65–80% by right-sizing and packing workloads, while converging platforms with modern virtualisation.
Stage 3: Turnkey AI cloud - hyperscaler-like experience. Optimised customer experience through self-service and automation.
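Stage 2's economics rest on packing: filling existing nodes before opening new ones. A toy first-fit-decreasing sketch (the node size and request mix are invented for illustration) shows how much utilisation that recovers versus naive one-workload-per-node placement:

```python
# Toy illustration of why workload packing lifts utilisation:
# place GPU requests onto 8-GPU nodes first-fit-decreasing, then
# compare fleet utilisation against naive one-workload-per-node.

NODE_GPUS = 8
requests = [5, 4, 3, 2, 2, 1, 1]  # GPUs requested per workload (hypothetical)

def pack(reqs):
    """First-fit-decreasing: fill existing nodes before opening new ones."""
    nodes = []
    for r in sorted(reqs, reverse=True):
        for n in nodes:
            if n["free"] >= r:
                n["free"] -= r
                break
        else:
            nodes.append({"free": NODE_GPUS - r})
    return nodes

packed = pack(requests)
used = sum(requests)
util_packed = used / (len(packed) * NODE_GPUS)
util_naive = used / (len(requests) * NODE_GPUS)  # one workload per node
print(f"packed: {len(packed)} nodes, {util_packed:.0%} utilisation")
print(f"naive:  {len(requests)} nodes, {util_naive:.0%} utilisation")
# packed: 3 nodes, 75% utilisation
# naive:  7 nodes, 32% utilisation
```

Even this crude model lands in the 65-80% band the stage targets; real schedulers add topology awareness and isolation constraints on top.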
k0rdent AI is the platform layer that makes this work. Instead of rebuilding the operating model every time you scale or onboard a new GPU generation, neoclouds get provisioning, isolation, metering, and lifecycle management handled across clusters and sites. The same stack applies to telcos, data centres, and MSPs entering the AI market, and to enterprises that need governance and sovereignty controls without hyperscaler dependency.
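The "define once, deploy everywhere, control drift" model is, at heart, a reconciliation loop: compare a template against observed state and converge. A minimal sketch, with hypothetical names rather than k0rdent's actual API:

```python
# Illustrative reconciliation loop: desired state comes from a template,
# observed state from the running cluster; the controller converges the two.
# Field names and values here are hypothetical, not k0rdent's schema.

def diff(desired: dict, observed: dict) -> dict:
    """Return the settings whose observed value has drifted from the template."""
    return {k: v for k, v in desired.items() if observed.get(k) != v}

def reconcile(template: dict, cluster: dict) -> dict:
    """Apply the template to a cluster, correcting any drift."""
    drift = diff(template, cluster)
    cluster.update(drift)  # converge toward desired state
    return drift           # report what was corrected

template = {"gpu_driver": "550.x", "cni": "cilium", "runtime": "containerd"}
cluster = {"gpu_driver": "535.x", "cni": "cilium"}  # drifted + incomplete

corrected = reconcile(template, cluster)
print(corrected)            # {'gpu_driver': '550.x', 'runtime': 'containerd'}
print(cluster == template)  # True: cluster now matches the template
```

Run continuously across many clusters and sites, the same loop is what turns "upgrade for the new GPU generation" from a rebuild into a template change.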
Who Runs Mirantis?
Alex Freedland - CEO & Co-founder. Returned January 2024. OpenStack Foundation Board member since inception. Business Insider "39 Most Important People in Cloud Computing" (2014).
Shaun O'Meara - CTO. Joined 2015. 20+ years enterprise infrastructure. Leads platform direction and AI infrastructure strategy.
Jerry Ibrahim - Head of Engineering. Joined June 2025. Former IT CTO at VMware. Executive roles at Tesla (Gigafactory 1), Align Technology, Juniper Networks.
Kevin Kamel - VP of Product Management. Owns k0rdent AI product definition. Emphasis on Kubernetes-native, composable operations for AI platforms.
Jason Bobb - SVP Sales, k0rdent AI. Career sales leader in cloud and infrastructure markets. Now leading global sales.
What is Mirantis' Competitive Edge?
Hardware-enforced tenant isolation. Multi-tenancy is enforced through hardware and virtualisation layers. The stack combines GPU partitioning, virtualisation, and secure multi-tenant networking with NVIDIA BlueField DPUs where applicable.
Multi-accelerator support and GPU efficiency. NVIDIA, AMD, Intel GPUs with no single-vendor lock. Slicing, placement, and allocation strategies designed to improve utilisation while maintaining performance isolation. Observability and FinOps are built in, allowing operators to attribute consumption by tenant and support billing-grade reporting.
Kubernetes-native composability. Infrastructure and services are defined as templates and reconciled continuously into running environments. This supports repeatable rollout across many clusters and sites, with controlled drift and controlled upgrades. O'Meara frames this as "composable, observable, scalable" platforms for the AI era.
Build-Operate-Transfer delivery model. Mirantis can deliver the system with defined operational practices and SLA-based management, then transition to enterprise support. The goal is to remove the staffing and integration burden that blocks AI infrastructure projects. 100+ engineers across 15 countries support design, deployment, and managed services.
Compliance-ready. ISO 9001, ISO 27001, PCI DSS. FedRAMP, STIG/DoD, and FIPS for government and high-assurance deployments.
Recent Moves
December 2025 - Joined Linux Foundation's Agentic AI Foundation as Silver Member.
December 2025 - k0rdent reached 92 validated infrastructure integrations (AWS, Azure, OpenStack, vSphere).
December 2025 - Launched MCP AdaptiveOps Services for enterprise Model Context Protocol server deployment.
October 2025 - Selected as software infrastructure partner for NVIDIA AI Factory for Government reference design.
October 2025 - Announced integration with NVIDIA BlueField for next-generation AI infrastructure.
October 2025 - Partnership with ThisWay Global for defence and sovereign AI deployments.
May 2025 - Launched k0rdent AI, offering neoclouds a ‘metal to model’ solution for platform engineering and operations that spans the whole AI stack, from GPUs up through models and application components.
March 2025 - Partnership with Gcore for global AI inference deployment.
January 2024 - Co-founder Alex Freedland returned as CEO to lead Mirantis’ pivot to AI.
What's Next for Mirantis?
The roadmap has five priorities:
GPU efficiency delivered at scale. Partitioning, topology-aware scheduling, isolation, and fairness as first-class, observable controls. The goal is to turn utilisation, placement policy, and multi-tenancy into economic advantages for neoclouds as they build out multi-tenant environments and layer on high-value customer services.
Commercial services builder. Enable neocloud operators to define AI services, set pricing, and publish to a catalogue. Turn GPU infrastructure into a productised service business.
Distributed inference. Intelligent routing of inference requests to appropriate edge and regional GPU resources. Balance latency, cost, and utilisation across distributed GPU infra.
Agentic infrastructure. MCP enablement integrated with deployment patterns, lifecycle management, and guardrails. Run agentic systems safely and repeatably at scale.
Practitioner solutions. Moving up the value stack with supported solutions for training and AI application hosting, helping teams run real workloads while maintaining operational discipline.
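The distributed inference priority above comes down to a routing decision: which endpoint should serve this request, given latency, cost, and current load? A hypothetical scoring function (the weights, endpoint data, and normalisation are all invented for illustration):

```python
# Hypothetical router for distributed inference: pick the endpoint with
# the best weighted trade-off of latency, cost, and utilisation headroom.
# Endpoint names, prices, and weights are invented for illustration.

endpoints = [
    {"name": "edge-fra",   "latency_ms": 12, "cost": 1.8, "util": 0.85},
    {"name": "region-ams", "latency_ms": 35, "cost": 1.2, "util": 0.55},
    {"name": "region-waw", "latency_ms": 60, "cost": 0.9, "util": 0.40},
]

def score(ep, w_lat=0.5, w_cost=0.3, w_util=0.2):
    """Lower is better: weighted sum of normalised latency, cost, and load."""
    return (w_lat * ep["latency_ms"] / 100
            + w_cost * ep["cost"] / 2.0
            + w_util * ep["util"])

def route(eps, max_util=0.9):
    """Drop saturated endpoints, then pick the best-scoring candidate."""
    candidates = [e for e in eps if e["util"] < max_util]
    return min(candidates, key=score)

print(route(endpoints)["name"])  # region-ams
```

With these weights the nearby edge site loses to a regional one: its latency win doesn't offset its higher cost and load. Shifting the weights toward latency flips the decision, which is exactly the knob a latency-sensitive tenant would turn.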
The bets underlying all of this are twofold:
AI inference will dwarf training - Alex Freedland has publicly stated he believes it will be 50x larger.
Neoclouds, sovereign cloud builders, and enterprises will need a platform layer that doesn't force them to rebuild from scratch every hardware generation, because, per Shaun O'Meara's framing: "All infrastructure is AI infrastructure."
The risk is market position.
Gartner puts Mirantis in the Challenger quadrant. Google, Microsoft, AWS, Red Hat, and SUSE are all Leaders. 6sense data shows MKE at 0.11% market share versus Red Hat OpenShift at 0.82%. Omdia called Mirantis "the only independent company recognised as a Leader" in container management.
But independence can also be a resource constraint against hyperscaler-backed competitors.
That narrows the addressable market, but it also sharpens the pitch.
The target is neoclouds building sovereign AI platforms, regional cloud providers entering the AI market, and enterprises running VMware who need an exit strategy.
Mirantis is betting that operators burned by lock-in will pay for neutrality.
The next two years will test that bet.
If all infrastructure really is AI infrastructure, someone has to run it. Mirantis wants to be that someone. And unlike most companies at this stage, they're profitable, open source, and already embedded in Fortune 100 infrastructure through the Docker Enterprise acquisition. Whether that's enough remains to be seen.
But if any company has the foundations to try, it's one that's been building platform infrastructure since before Kubernetes existed.