Everything You Need to Know About Edge Storage

Warehouse robot using edge solution with Solidigm SSDs.

Edge AI’s storage imperative

Picture a half-depth, single-socket server bolted above a grocery-store freezer at 40° C. It hums quietly as a small GPU works through multimodal prompts in real time. This is the practical edge: a few hundred watts to share, a shoebox footprint, and barely any airflow. In that setting, storage, not compute, determines whether inference stays smooth or stalls.

This edge bottleneck manifests in specific ways: strict power limits, cramped space, warm ambient air, and fast-growing model files. Field results from Antillion’s wearable servers, Zhengrui’s livestock genomics platform, and PEAK:AIO’s on-premises AI clusters show how replacing legacy SSD or hard-disk tiers with the latest Solidigm™ drives trims footprints, slashes power use, and keeps GPUs fed. Whether the application sits in a storefront clinic, an industrial yard, or a rural substation, the right choice of storage can turn tough edge math into measurable performance headroom.

Why the edge is harder than the core

A hyperscale rack in a cloud hall can draw more than 10 kW and sits under rows of chillers. By contrast, an edge node might be wedged into a broom closet or a pole-mount cabinet where every watt, cubic inch, and degree matters. Many purpose-built edge servers ship with power supplies rated at only 200 W to 300 W, a ceiling that must cover CPU, GPU, networking, and storage. Core racks now average about 12 kW and are moving much higher for AI clusters, an order of magnitude more headroom than the edge can expect.
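To make that power ceiling concrete, here is a minimal back-of-envelope sketch; every wattage is an illustrative assumption, not a spec for any particular server.

```python
# Hypothetical edge power budget. All numbers are illustrative assumptions.
PSU_WATTS = 250          # mid-range of the 200-300 W edge supplies cited above

components = {
    "cpu": 65,           # low-power server CPU (assumed)
    "gpu": 75,           # small inference accelerator (assumed)
    "nic_and_misc": 25,  # networking, fans, board overhead (assumed)
}

storage_budget = PSU_WATTS - sum(components.values())
print(f"Watts left for storage: {storage_budget}")  # → Watts left for storage: 85
```

On these assumed numbers, barely 85 W remains for every drive in the box, which is why per-drive power matters so much more at the edge than in a hyperscale hall.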

Space is equally constrained. A short-depth one-unit chassis may expose just two 2.5-inch drive bays and a pair of E1.S slots, so capacity per system directly limits the size of local models and data sets. Because spare slots rarely exist, scaling capacity later often means a forklift upgrade rather than a simple drive swap. As a result, getting capacity right from day one is critical to both performance headroom and ROI.

Cooling is harsh as well. Rugged edge systems such as Lenovo’s ThinkEdge SE450 are qualified to run continuously at inlet temperatures up to 45° C, well above the 30° C target in most data center cold aisles. In such air, any drive that burns 10 W or more can push a small enclosure out of spec. Fans must spin slower to stay within acoustic limits for storefronts, clinics, and retail floors, so every component has to shed heat efficiently on its own.

Bandwidth does not make up the difference. Where cloud servers stream data from petabyte (PB) storage fabrics, edge boxes often rely on a single 1 GbE circuit or a shared 5G link. Models, embeddings, video buffers, and logs therefore have to sit inches from the GPU, not miles away. Service calls are infrequent, so drives must hold up for years without attention.

Solidigm SSD storage pillars of endurance, density and throughput for edge storage solutions.

These combined limits rewrite the storage checklist. Edge builders need capacities that rival the cloud yet fit in one or two E1.S drive slots, write endurance that soaks bursty sensor traffic while staying under a few watts, and read speeds that keep GPUs busy even when the cabinet air is forty-plus degrees.

Performance vs. capacity: Workload-first design

Some edge workloads live and die by performance. Live video analytics, real-time genomic matching, and rapid retrieval-augmented generation demand tens of gigabytes per second of sustained read bandwidth with microsecond latency. Others, such as autonomous-vehicle data capture or on-site compliance logging, prize capacity instead, retaining weeks or months of context. Whatever the mix, designers still have to fit inside the same tight power envelope and cramped enclosure.

The art of efficiency is choosing endurance, throughput, and density that align with actual traffic patterns while leaving enough thermal headroom for CPUs, GPUs, and networking.

The anatomy of a balanced edge stack

Most successful designs layer storage so each tier plays to its strength.

  • Write-burst cache — Absorbs unruly ingest traffic without premature wear
  • Working tier — Feeds accelerators with predictable throughput and latency
  • Capacity tier — Parks context files while sipping power at idle

Because each layer is sized for its real workload, a representative stack with Solidigm SSDs can fit in fewer than eight bays, draw less than 60 W, and cope with ambient air in the low 40° C range. The same pattern ingests live DICOM images in a clinic, sifts retail video in a store, or archives a year of substation waveforms without exhausting the cabinet.

The economics reinforce the design. Swapping a dozen low-capacity drives for three Solidigm high-capacity NVMe SSDs could cut drive count by 75%, trim roughly 150 W of heat, and lower storage costs by about 25%. Those saved watts and slots turn into headroom for extra GPUs, faster network cards, or simply quieter fans.
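The consolidation arithmetic above can be sketched directly; the per-drive average wattages below are assumptions chosen to be plausible, not measured values.

```python
# Consolidation arithmetic for the drive swap described above.
# Average per-drive wattages are illustrative assumptions.
old_drives, old_avg_watts = 12, 15   # low-capacity legacy drives (assumed)
new_drives, new_avg_watts = 3, 10    # high-capacity NVMe SSDs at mixed duty (assumed)

drive_reduction = 1 - new_drives / old_drives
watts_saved = old_drives * old_avg_watts - new_drives * new_avg_watts

print(f"Drive count cut by {drive_reduction:.0%}, about {watts_saved} W of heat trimmed")
# → Drive count cut by 75%, about 150 W of heat trimmed
```

The percentage depends only on the drive counts; the wattage saved depends on the duty cycle you assume, so treat the 150 W as an order-of-magnitude estimate.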

Real-world lessons from the field

Antillion first responders

Antillion builds miniature edge computers that field crews wear on a vest. Early versions relied on 2.5-inch SATA SSDs, which limited both capacity and throughput. The company replaced those drives with Solidigm E1.S NVMe SSDs from the high-performance D7 Series family. The swap more than doubled streaming bandwidth for high-resolution video and sensor feeds, cut system-build times by about 30% during software deployment, and, after hundreds of units shipped, produced zero drive failures in the field. The compact Solidigm E1.S drives let the Pace A2 tactical node carry large data sets without adding weight, proving that rugged edge gear no longer has to trade capacity for size. Learn more about our collaboration with Antillion in this article: Antillion and Solidigm: Driving Innovation at the Edge.

Zhengrui Technology livestock genomics

In Sichuan, Zhengrui Technology runs an animal-husbandry analytics platform that ingests genomic sequences, phenotype images, and environmental telemetry. A single two-unit server filled with twenty-four Solidigm D5-P5336 high-capacity drives now holds roughly 700TB and sustains about 1 million random IOPS. Moving from hybrid HDD and low-capacity SSD storage to an all high-capacity Solidigm SSD configuration reduced rack space and storage power by 79%, which freed budget and thermal headroom for additional GPUs that train disease-prediction and breeding-value models on site. Learn more about our collaboration with Zhengrui in this article: Zhengrui Technology and Solidigm SSDs.

PEAK:AIO research clusters 

PEAK:AIO partnered with Dell to build a two-unit AI data server that serves 120 GB per second over NVIDIA ConnectX-7 adapters. The system reaches that speed by filling all twenty-four NVMe bays with Solidigm 61.44TB high-capacity SSDs, delivering 1.5PB in a chassis small enough for satellite labs and regional clinics. In power-budget studies, the same approach showed savings of 10 to 20 MW in a 50 MW data center model, headroom that lets operators add about 50% more GPU capacity without raising the site’s total draw. Learn more about our collaboration with PEAK:AIO in this article: PEAK:AIO, MONAI, and Solidigm: Revolutionizing Storage for Medical AI.
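A rough reconstruction of that headroom figure, assuming GPUs account for roughly 30 MW of the 50 MW site (an assumption for illustration; the study’s actual breakdown is not given here):

```python
# Rough reconstruction of the power-budget study above.
# The GPU share of the facility is an assumption, not a figure from the study.
site_mw = 50
storage_savings_mw = 15                # midpoint of the 10-20 MW savings cited above
assumed_gpu_mw = site_mw * 0.6         # assumed share of site power feeding GPUs

extra_gpu_fraction = storage_savings_mw / assumed_gpu_mw
print(f"Extra GPU capacity within the same site draw: {extra_gpu_fraction:.0%}")
# → Extra GPU capacity within the same site draw: 50%
```

Under these assumptions the freed storage power maps to about 50% more GPU capacity, matching the article’s figure; a different assumed GPU share would shift the result proportionally.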

Taken together, these deployments confirm the pattern. When storage density rises and power per TB falls, edge servers shrink, operating costs drop, and accelerators stay busy instead of waiting for data.

Mapping the Solidigm portfolio to edge needs

Solidigm D7-P5810 for caching tier storage, Solidigm D7-PS1010 for performance tier storage, Solidigm D5-P5336 for capacity tier storage.

The table below illustrates how three drive options align with the cache, working, and capacity roles inside a power-limited edge server.

Attribute                | Solidigm™ D7-P5810 | Solidigm™ D7-PS1010                              | Solidigm™ D5-P5336
Category                 | Caching Tier       | Performance Tier                                 | Capacity Tier
Edge AI Role             | Write-burst cache  | Hot model and index store                        | Large context and retention
Interface                | PCIe 4.0 x4        | PCIe 5.0 x4                                      | PCIe 4.0 x4
Form Factors             | U.2 15 mm          | E1.S 9.5 mm, E1.S 15 mm, E3.S 7.5 mm, U.2 15 mm  | U.2 15 mm, E3.S 7.5 mm, E1.L 9.5 mm
Sequential Read          | ≈ 6.4 GB/s         | ≈ 14.5 GB/s                                      | ≈ 7 GB/s
Random Read              | ≈ 0.9M IOPS        | ≈ 3.1M IOPS                                      | ≈ 1M IOPS
Capacity Range           | 0.8–1.6 TB         | 1.92–15.36 TB                                    | 7.68–122.88 TB
Endurance (DWPD, 5 yr)   | ≈ 50               | 1.0                                              | ≈ 0.5
Idle / Active Power      | 5 W / < 10 W       | 5 W / 23 W (avg)                                 | 5 W / ≈ 25 W

A practical layout pairs a couple of cache-tier SSDs for ingest spikes, a small bank of Gen5 performance-tier SSDs for the active set, and one or two high-capacity SSDs for context data. Matching endurance to write pressure, bandwidth to accelerator demand, and density to retention needs converts edge constraints into breathing room. With the right Solidigm tiers in place, cloud-class AI can live at the storefront, on the factory floor, or beside a rural feeder line without hauling a full data center budget along.
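As a sizing sketch, that layout can be totaled with the active-power and capacity figures from the table above; the drive counts and the specific capacity points chosen are assumptions for illustration, and the power sum is a worst case with every drive simultaneously at maximum load (mixed duty cycles sit far lower).

```python
# Sizing sketch for the tiered layout described above. Active-power and
# capacity figures come from the table; drive counts and the specific
# capacity points are assumptions for illustration.
layout = [
    # (tier, drive count, active watts each, TB each)
    ("D7-P5810 cache tier",    2, 10, 1.6),
    ("D7-PS1010 working tier", 2, 23, 7.68),
    ("D5-P5336 capacity tier", 2, 25, 61.44),
]

bays = sum(count for _, count, _, _ in layout)
worst_case_watts = sum(count * watts for _, count, watts, _ in layout)
capacity_tb = sum(count * tb for _, count, _, tb in layout)

print(f"{bays} bays, {worst_case_watts} W worst case, {capacity_tb:.1f} TB")
# → 6 bays, 116 W worst case, 141.4 TB
```

Six bays and roughly 141 TB comfortably fit the footprint described earlier, and because all three tiers idle near 5 W each, typical draw stays well below the worst-case figure.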

Conclusion: Storage as the edge enabler

Edge computing forces AI into spaces once meant for routers and fuse boxes. In those tight quarters the flash you choose governs how much data reaches the GPU, how cool the chassis runs, and how soon the investment pays off. Field deployments show a repeatable pattern: 

  • Cache-tier SSDs for unruly writes
  • Gen5 performance SSDs for the working set
  • High-capacity SSDs for everything that must stay local

Antillion’s tactical rigs, Zhengrui’s livestock genomics cluster, and PEAK:AIO’s research servers each followed that pattern and recorded smaller footprints, lower energy use, and steadier accelerator utilization.

Demand will only climb. Models keep expanding, multimodal inference widens access patterns, and analytics stacks drive deeper read bursts. Storage therefore has to grow along three vectors at once: more terabytes per slot, faster pipes into GPU memory, and smarter tiering that adapts to changing workloads. 

The Solidigm roadmap already moves in that direction with liquid-cooled, cold-plate-enabled E1.S SSDs, full-speed PCIe Gen5 lanes, and firmware aimed at near-device data reduction. As those features mature, heavier AI pipelines can leave the data center and run where the sensors are.


About the Author

Jeff Harthorn is Marketing Analyst for AI Data Infrastructure at Solidigm. Jeff brings hands-on experience in solutions architecture, product planning, and marketing. He shapes corporate AI messaging, including competitive studies on liquid-cooled E1.S SSDs, translating geek-level detail into crisp business values for our customers and collaborative partners. Jeff holds a Bachelor of Science in Computer Engineering from California State University, Sacramento.