The all-flash consensus is underwater.

The cause is structural, not cyclical. AI inference demand has outstripped storage supply. Memory manufacturers are shifting NAND production lines to DRAM, where margins are fatter. North American hyperscalers are stockpiling. Short-term capacity tightness is "difficult to resolve."

The economics made sense when NAND was cheap and capacity was abundant. Neither holds. The organisations locked into all-flash architectures are now exposed.

One storage company saw this coming.

What is VDURA?

VDURA is modern data-storage infrastructure software built for AI factories and Neocloud operators.

Not a general-purpose filer. Not an all-flash array betting on stable SSD pricing. A true parallel file system with 25 years of production hardening, now re-architected around HYDRA - a hyperscale-inspired, software-defined platform delivering distributed metadata, extreme durability, and hybrid economics under a single namespace.

The company targets three buyer profiles:

  • Neocloud operators: GPU cloud providers who must maximise utilisation, adapt to changing workloads, and control costs as data volumes grow exponentially.

  • AI factories: Large-scale training and inference environments running continuous production workloads.

  • Traditional HPC: Life sciences, energy, manufacturing, federal, research - the verticals where Panasas built its name across 1,000+ deployments in 50+ countries.

VDURA is Panasas evolved. Founded in 1999 by Garth Gibson, who co-invented RAID at UC Berkeley and built Carnegie Mellon's Parallel Data Lab. The company shipped the first enterprise-grade parallel file system, pioneered file-level erasure coding, and deployed at NASA, RTX, Boeing, and Airbus. That production-hardened foundation now runs as VDURA's software platform.

On May 7, 2024, Panasas rebranded to VDURA and pivoted from proprietary appliances to software-defined storage with subscription pricing. The company has raised over $150 million to date. 

What Problem Does VDURA Solve?

AI workloads have outgrown general-purpose storage. VDURA breaks the problem down into three structural failures:

  • Storage stalls kill GPU economics. Any bottleneck in data delivery leaves expensive accelerators sitting idle - a direct hit to Neocloud margins.

  • Centralised metadata cannot scale. Training and inference pipelines involve thousands of concurrent reads, writes, and checkpoints. Systems that serialise metadata access or rely on single-controller architectures collapse under AI concurrency.

  • All-flash economics are volatile. VDURA argues that SSD pricing follows boom-bust cycles that organisations cannot forecast over a three-year planning horizon. Over a comparable period, HDD pricing moved roughly 35% while flash pricing surged far more sharply - a structural gap that punishes rigid all-flash commitments.
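The cost asymmetry in that last bullet can be made concrete with back-of-the-envelope arithmetic. All figures below (the $/TB prices, the 10 PB estate, the 20% flash fraction, the 50% surge) are hypothetical assumptions chosen for illustration, not VDURA's numbers:

```python
# Illustrative media-cost comparison for a hypothetical 10 PB estate.
# Every price here is an assumption made up for the arithmetic.

CAPACITY_TB = 10_000          # 10 PB usable
SSD_PER_TB = 60.0             # assumed flash $/TB at purchase time
HDD_PER_TB = 15.0             # assumed disk $/TB at purchase time

def all_flash_cost(capacity_tb: float, ssd_per_tb: float) -> float:
    """Everything lives on flash."""
    return capacity_tb * ssd_per_tb

def hybrid_cost(capacity_tb: float, flash_fraction: float,
                ssd_per_tb: float, hdd_per_tb: float) -> float:
    """Hot tier on flash; bulk capacity (checkpoints, retention) on HDD."""
    flash_tb = capacity_tb * flash_fraction
    hdd_tb = capacity_tb - flash_tb
    return flash_tb * ssd_per_tb + hdd_tb * hdd_per_tb

flash = all_flash_cost(CAPACITY_TB, SSD_PER_TB)
hybrid = hybrid_cost(CAPACITY_TB, 0.20, SSD_PER_TB, HDD_PER_TB)
print(f"All-flash:          ${flash:,.0f}")       # → $600,000
print(f"Hybrid (20% flash): ${hybrid:,.0f}")      # → $240,000

# Sensitivity: a flash price surge hits the all-flash build hardest,
# because 100% of its capacity reprices instead of 20%.
surge = 1.5  # assume SSD $/TB rises 50% at refresh time
flash_s = all_flash_cost(CAPACITY_TB, SSD_PER_TB * surge)
hybrid_s = hybrid_cost(CAPACITY_TB, 0.20, SSD_PER_TB * surge, HDD_PER_TB)
print(f"After surge: all-flash ${flash_s:,.0f}, hybrid ${hybrid_s:,.0f}")
```

The point of the sketch is the exposure, not the exact savings: in a hybrid build only the flash fraction reprices when the NAND market swings.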

VDURA solves this with HYDRA - a hybrid-native, distributed architecture built for AI-scale parallelism.

HYDRA delivers true parallelism across both data and metadata, supports flash-first performance where GPUs demand it, and enables scalable capacity tiers for checkpoints and retention under a single namespace. 

Performance and capacity scale independently. The architecture tolerates failure as a normal operating condition.
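One way to picture "distributed metadata" is a hash-sharded namespace: each file path maps deterministically to one of many metadata shards, so lookups parallelise instead of funnelling through a single controller. This is a generic toy sketch of the technique, not VDURA's actual HYDRA implementation:

```python
import hashlib

class MetadataShards:
    """Toy distributed-metadata map: file paths hash across N shards,
    so concurrent lookups spread out rather than serialising on one
    controller. (Generic illustration, not the real HYDRA design.)"""

    def __init__(self, num_shards: int):
        self.shards = [dict() for _ in range(num_shards)]

    def _shard_for(self, path: str) -> int:
        # Deterministic placement: same path always hits the same shard.
        digest = hashlib.sha256(path.encode()).digest()
        return int.from_bytes(digest[:4], "big") % len(self.shards)

    def put(self, path: str, meta: dict) -> None:
        self.shards[self._shard_for(path)][path] = meta

    def get(self, path: str) -> dict:
        return self.shards[self._shard_for(path)][path]

shards = MetadataShards(num_shards=8)
for i in range(10_000):
    shards.put(f"/train/ckpt-{i}", {"size": i, "tier": "hdd"})

# 10,000 checkpoint entries spread across 8 shards instead of one.
counts = [len(s) for s in shards.shards]
print(counts)
```

Adding shards raises metadata throughput independently of data capacity, which is the "scale independently" property the paragraph above describes; in a production system each shard would also be replicated so its loss is routine rather than fatal.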

For Neocloud operators, this eliminates the need to stitch together multiple storage systems across different pipeline stages, enabling the entire AI lifecycle to run on a single platform.

Who Runs VDURA?

What is VDURA's competitive edge?

Recent Moves

What's Next for VDURA?

Data Platform V12 will reach general availability in the coming months.

This will bring the elastic metadata engine, system-wide snapshots, and SMR optimisation into production, targeting higher concurrency, faster checkpointing, and simpler operations for multi-tenant Neocloud environments. Also on the roadmap: multi-tenant isolation, sustained GPU utilisation under mixed workloads, and operational tooling built for always-on AI services rather than batch-only environments.

Partner-led, partner-delivered reference architectures will also expand beyond AMD, with VDURA planning validated AI platforms across diverse GPU, server, and networking ecosystems.

The bet underneath all of this is that AI infrastructure is a lifecycle problem, not a storage SKU decision. 

The faster organisations move models from ingest to training to inference, the faster they generate revenue. VDURA is positioning to own that lifecycle compression.

The risk is market position. 

Competitors are landing billion-dollar-plus contracts on one hand and raising billion-dollar-plus funding rounds on the other. They have ecosystem momentum and an enterprise sales motion that VDURA's $150M in total funding cannot match directly.

The Flash Relief Program's 50% undercut is aggressive positioning, but guarantees only work if buyers comparison-shop - and many default to the all-flash incumbents without looking.

The strongest signal in VDURA's favour, however, is production scale. 

20PB+ deployments running at 800+ GB/s sustained throughput. Exabytes managed globally across thousands of deployments. Parallel file system maturity that competitors cannot easily replicate with funding alone.

Garth Gibson's return signals the depth of conviction VDURA's leadership has in its chosen direction of travel - the HYDRA architecture is a deliberate break from legacy storage assumptions.

The open question is whether Neocloud operators and AI factory builders will bet on purpose-built infrastructure over the all-flash consensus - and whether VDURA can capture that demand before flash pricing normalises and the economic argument narrows.
