Storage has always been the “underappreciated child” in computing architectures, said Scott Shadley, Director of Leadership Narrative and Technology Evangelist at Solidigm, which develops enterprise solid-state drives (SSDs).
But AI has caused a step change in the volume and velocity of data being gathered and processed every day. “Even just five years ago, you’d capture a petabyte of data and only keep 100 terabytes of it. Now we want to hold on to everything,” said Shadley.
Until now, decisions about storage have largely been made on a cost-per-gigabyte basis. Nearly 90% of data center storage still relies on old-fashioned hard disk drives (HDDs), which are cheaper to buy than better-performing SSDs.
But HDDs are struggling to keep up with AI workflows, putting a renewed focus on how enormous quantities of data are stored. After all, lightning-quick GPUs can only operate as fast as data can get to them.
Dollars per terabyte
HDDs are “engineering marvels, to be honest,” said Shadley. Their cost was once measured in dollars per gigabyte, but the aging technology has become so efficient that drives now run around $0.011 per gigabyte (roughly $11 per terabyte), making dollars per terabyte the only practical unit.
And HDDs are likely to remain in service for some time. Shadley cited the CERN Tape Archive, which stores the vast amounts of data generated by the Large Hadron Collider, as an example of how older storage technologies remain relevant even as they’re technically superseded.
However, the premise that HDDs are the most cost-effective method of mass data storage is beginning to crack. In its recent white paper, The Economics of Exabyte Storage, Solidigm demonstrated a lower 10-year Total Cost of Ownership (TCO) for SSDs storing one exabyte (one million terabytes).
SSDs may cost more upfront, but they are more cost-effective in the long run, using less space, consuming less energy, and offering better reliability.
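As a rough sketch of how such a comparison is structured, the cost of an exabyte-scale fleet can be broken into acquisition, energy, and space. The figures below are illustrative placeholders, not numbers from the Solidigm white paper, which also models factors such as cooling overhead, failure rates, and performance overprovisioning.

```python
# Skeleton of a 10-year total-cost-of-ownership (TCO) estimate for storing one
# exabyte (1,000,000 TB). Every figure below is an illustrative placeholder,
# not a number from the Solidigm white paper.

YEARS = 10
EXABYTE_TB = 1_000_000
POWER_COST_PER_KWH = 0.10       # assumed flat electricity price, USD
RACK_UNIT_COST_PER_YEAR = 300   # assumed facility cost per rack unit, USD

def ten_year_tco(drive_tb, price_per_tb, watts_per_drive, drives_per_u, refresh_years):
    """Break the cost of an exabyte-scale fleet into acquisition, energy, and space."""
    drives = -(-EXABYTE_TB // drive_tb)        # ceiling division: drives needed
    purchases = -(-YEARS // refresh_years)     # how many times the fleet is bought
    capex = drives * drive_tb * price_per_tb * purchases
    energy = drives * watts_per_drive / 1000 * 24 * 365 * YEARS * POWER_COST_PER_KWH
    space = -(-drives // drives_per_u) * RACK_UNIT_COST_PER_YEAR * YEARS
    return {"drives": drives, "capex": capex, "energy": energy, "space": space}

# Example call with placeholder SSD figures; rerun with your own HDD and SSD
# quotes, wattages, densities, and refresh intervals to compare two fleets.
print(ten_year_tco(drive_tb=122, price_per_tb=80, watts_per_drive=12,
                   drives_per_u=32, refresh_years=10))
```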
And where performance matters, even the slowest SSD outperforms the fastest HDD, delivering speeds that some data-intensive workflows simply couldn’t function without.
Real-time performance
One of the many research areas at Los Alamos National Laboratory (LANL) is simulating seismic activity from underground nuclear explosions, so that weapons tests can be detected around the globe.
This process generates enormous amounts of data that need near-instantaneous capture and, often, simultaneous analysis. HDDs simply can’t keep up with this kind of intensive read-write workflow.
To read, the drive’s head must seek to the track where the data sits and wait for the spinning platter to bring it underneath, introducing a latency that fluctuates depending on where the data is located. Writing requires the same mechanical repositioning to find free space.
This isn’t necessarily an issue in slower Big Data workflows, such as parsing long tails of traffic cam footage. But, said Shadley, “that speed is not fast enough for what the AI factories of the world are going to be needing.”
Processes like the LANL experiments simply wouldn’t work without SSDs that can write and read in parallel, in near-real-time, at predictable speeds.
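A back-of-the-envelope model illustrates why that mechanical latency matters: a random read on an HDD pays a head seek plus, on average, half a platter rotation, while flash latency is roughly constant. The timings below are typical ballpark values for illustration, not measurements of any particular drive.

```python
# Back-of-the-envelope random-read latency for a spinning disk vs. flash.
# Timings are typical ballpark values, not benchmarks of any specific product.

def hdd_random_read_ms(avg_seek_ms=8.0, rpm=7200, transfer_ms=0.1):
    """Average random read: head seek + half a platter rotation + data transfer."""
    half_rotation_ms = (60_000 / rpm) / 2   # on average the data is half a turn away
    return avg_seek_ms + half_rotation_ms + transfer_ms

def ssd_random_read_ms(latency_us=100.0):
    """Flash has no moving parts; read latency is roughly constant."""
    return latency_us / 1000

hdd_ms = hdd_random_read_ms()
ssd_ms = ssd_random_read_ms()

print(f"HDD: ~{hdd_ms:.1f} ms per random read  (~{1000 / hdd_ms:,.0f} reads/s per drive)")
print(f"SSD: ~{ssd_ms:.2f} ms per random read (~{1000 / ssd_ms:,.0f} reads/s at queue depth 1)")
```

Real NVMe SSDs go far beyond that queue-depth-1 figure by serving many requests in parallel, which is exactly the property workflows like LANL’s rely on.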
It’s a glimpse of the kind of data processing AI is making commonplace, and the demand will only accelerate as the technology matures, requiring even better storage solutions.
Evolving data storage
“From a capacity point of view, hard drives have hit a wall,” said Shadley. Today, the largest HDDs hold around 30 TB, a figure expected to reach 100 TB by 2030.
But Solidigm already ships 122 TB SSDs that are physically smaller, with plenty of scope for higher density (more storage in the same space) or entirely new form factors.
For example, Solidigm worked with NVIDIA on eSSD liquid-cooling challenges, “addressing issues like hot swap-ability and single-side cooling constraints,” said Shadley.
The resulting product is a “liquid-cooled, direct to chip, cold plate, hot pluggable SSD that [doesn’t] take any extra footprint in the server,” said Shadley.
It is the first cold-plate-cooled enterprise SSD available for reference architectures, demonstrated at NVIDIA’s annual GPU Technology Conference (GTC) in March 2025.
Other innovations are on the horizon. Solidigm is working with many OEMs on solutions where speed isn’t a priority, but SSDs’ reliability, smaller footprint, and lower energy draw are advantageous.
One key benefit could be freeing up resources to redirect elsewhere: replacing data center HDDs with SSDs can deliver up to 77% power savings and use 90% less rack space, making more watts available for GPUs, for example.
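To put those percentages in concrete terms, here is a quick, hypothetical calculation; the starting storage power draw and the per-server GPU figure are assumptions for illustration, not numbers from Solidigm or the article.

```python
# Hypothetical illustration of the power headroom an HDD-to-SSD migration
# could free for GPUs, applying the "up to 77% power savings" figure quoted
# above. The 500 kW storage draw and 10 kW GPU server are assumptions.

storage_power_kw = 500        # assumed power draw of an all-HDD storage tier
power_savings = 0.77          # "up to 77%" savings from switching to SSDs
gpu_server_kw = 10            # assumed draw of one 8-GPU server

freed_kw = storage_power_kw * power_savings
extra_gpu_servers = int(freed_kw // gpu_server_kw)

print(f"Power freed: ~{freed_kw:.0f} kW")
print(f"Enough headroom for roughly {extra_gpu_servers} additional GPU servers")
```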
Keeping up with AI
Ultimately, serving GPUs is the big challenge in AI computing. Everything upstream must keep pace, or those GPUs are not working to their full potential.
“We really need to start paying more attention to that lake of data that happens to sit in storage,” said Shadley. After all, the lake is where the pipeline starts.
Learn more about how to make sure your data infrastructure is built on a solid foundation.
