E1 and E3 EDSFF to Take Over from M.2 and 2.5 in SSDs

MrGuvernment

https://www.servethehome.com/e1-and-e3-edsff-to-take-over-from-m-2-and-2-5-in-ssds-kioxia/

In the not-too-distant future (2022), we are going to see a rapid transition away from two beloved SSD form factors in servers. Both the M.2 and 2.5″ SSD form factors have been around for many years. As we transition into the PCIe Gen5 SSD era, EDSFF is going to be what many of STH’s community will want. Instead of M.2 and 2.5″, we are going to have E1.S, E1.L, E3.S, and E3.L along with a mm designation that means something different than it does with M.2 SSDs. If that sounds confusing, you are in luck. At STH, we managed to grab some drives (thanks to Kioxia for the help here) to be able to show you exactly how the world of storage and some PCIe memory/accelerators will work.
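For reference, a rough cheat sheet of the EDSFF variants the article mentions. The figures below are recalled from the SNIA SFF-TA-1006/1007/1008 specs and rounded, so treat them as approximate and verify against the specs; the point is that the "mm" in an EDSFF name is the device thickness, not a width/length code like M.2's 2280.

```python
# Rough EDSFF cheat sheet. Figures are approximate, recalled from the
# SNIA SFF-TA-1006 (E1.S), SFF-TA-1007 (E1.L) and SFF-TA-1008 (E3) specs;
# verify against the specs before relying on them.
EDSFF = {
    #            length_mm, width_mm, thickness_options_mm
    "E1.S": dict(length=111.5, width=31.5, thicknesses=[5.9, 8.0, 9.5, 15, 25]),
    "E1.L": dict(length=318.8, width=38.4, thicknesses=[9.5, 18]),
    "E3.S": dict(length=112.8, width=76.0, thicknesses=[7.5, 16.8]),
    "E3.L": dict(length=142.2, width=76.0, thicknesses=[7.5, 16.8]),
}

# Unlike "M.2 2280" (22 mm wide x 80 mm long), "E1.S 9.5mm" or "E1.S 25mm"
# names the device thickness -- which mostly decides how big a heatsink fits.
for name, ff in EDSFF.items():
    opts = "/".join(f"{t:g}" for t in ff["thicknesses"])
    print(f"{name}: ~{ff['length']:g} x {ff['width']:g} mm, {opts} mm thick")
```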
 
I smell a potential for discounted U.2 and M.2 optanes in the near future.

I honestly don't understand why they do this. Is there really a need in servers that makes this form factor change necessary, or is it just arbitrary?

The more standards you have, in general, the worse things are: volumes are lower, and thus economies of scale conspire to make the same things more expensive...
 
This is primarily for density and capacity efficiency:

[Image: EDSFF 1U E1 mechanical fit study, 2021]

[Image: Yosemite V3 EDSFF E1.S 25mm thermal performance]
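To put rough numbers on the density argument: the per-1U drive counts and capacity points below are purely illustrative placeholders in the ballpark vendors quote, not figures from the article, so swap in real numbers for a specific chassis and SSD model.

```python
# Illustrative density math only -- drive counts per 1U front panel and
# per-drive capacities are assumed placeholders; actual counts vary by chassis.
configs = {
    "2.5in U.2 (1U)": dict(drives_per_1u=10, tb_per_drive=15.36),
    "E1.S (1U)":      dict(drives_per_1u=32, tb_per_drive=7.68),
    "E1.L (1U)":      dict(drives_per_1u=32, tb_per_drive=30.72),
}

for name, c in configs.items():
    total = c["drives_per_1u"] * c["tb_per_drive"]
    print(f"{name}: {c['drives_per_1u']} drives x {c['tb_per_drive']} TB "
          f"= {total:.1f} TB per rack unit")
```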
 
Yeah, well, I would prefer fewer arbitrary standards, but if this is server-only, maybe that is okay.

Still, I would prefer it work with existing standards. If they made an SSD in the 3.5" form factor, maybe they could fit the cooling parts, etc.
 
Is there really a need in servers that makes this form factor change necessary, or is it just arbitrary?
If you read the article, they provide a reason (or a rationalization, if you prefer): more efficient use of space.
 
The additional height on E3 also allows for an x8 connector, which may be important if one needs more lanes.

I find this interesting because several years ago, when PCIe 5.0 was first being talked up, I remember reading that the interest on the storage side was in x1 drives with the same performance as current-generation x4 drives, so you could cram more total drives onto a single server CPU rather than having a similar number of significantly faster individual drives.
 
In the consumer space, it still is; there is a lot of talk about how it will let OEMs and AIB manufacturers go from 16 lanes to the GPU and 4 to the M.2 down to 8 and 2 respectively, and then use those 10 freed-up lanes for other equipment or just leave them out to cut costs. But enterprise storage is a different beast, and AI especially, where it's all about how fast you can feed the data in. Those PCIe 5.0 rulers combined with the Optane memory modules are going to make for AI beasts. IF Intel's server GPUs are competitive against Nvidia's A100s, then Intel has a platform in place that will make Nvidia feel some serious hurt.
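The lane math behind that, as a minimal sketch: throughput figures are approximate, counting only the 128b/130b line encoding and ignoring packet/protocol overhead, but each PCIe generation doubles the per-lane rate, so Gen5 x1 lands around Gen3 x4, Gen5 x2 around Gen4 x4, and Gen5 x8 around Gen4 x16.

```python
# Approximate usable PCIe bandwidth: raw GT/s per lane times the 128b/130b
# encoding efficiency (Gen3+), divided by 8 bits/byte. Real-world numbers
# come in somewhat lower once packet overhead is counted.
GT_PER_LANE = {"Gen3": 8, "Gen4": 16, "Gen5": 32}

def gbps(gen: str, lanes: int) -> float:
    """Rough usable GB/s for a link of `lanes` lanes at generation `gen`."""
    return GT_PER_LANE[gen] * (128 / 130) / 8 * lanes

print(f"Gen5 x1 ~ {gbps('Gen5', 1):.1f} GB/s vs Gen3 x4  ~ {gbps('Gen3', 4):.1f} GB/s")
print(f"Gen5 x2 ~ {gbps('Gen5', 2):.1f} GB/s vs Gen4 x4  ~ {gbps('Gen4', 4):.1f} GB/s")
print(f"Gen5 x8 ~ {gbps('Gen5', 8):.1f} GB/s vs Gen4 x16 ~ {gbps('Gen4', 16):.1f} GB/s")
```

Same arithmetic is why the x8/x16-capable E3 connector mentioned above is interesting: a Gen5 x16 link works out to roughly 63 GB/s.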
 
It has to do with profit. Once something becomes a standard, the widespread competition reduces profit margins close to zero.
Introducing something new provides a few years of reduced competition and higher margins.
 
As noted in the video, there's the whole latch issue so each OEM can make their own... like, just make a standard and be done with it, but OEMs want to be able to charge a fortune for their drive sleds / latches.
 
I find this interesting because several years ago, when PCIe 5.0 was first being talked up, I remember reading that the interest on the storage side was in x1 drives with the same performance as current-generation x4 drives, so you could cram more total drives onto a single server CPU rather than having a similar number of significantly faster individual drives.
Good catch, didn't notice that.

" The E3 family connector is designed for x4 to x16 PCIe lanes and power envelopes up to 70W."

https://www.snia.org/forums/cmsi/knowledge/formfactors
 
I smell a potential for discounted U.2 and M.2 optanes in the near future.

I doubt it. New ones are largely produced in line with demand. Existing ones will be retired and sold or destroyed with the servers they're in, on whatever the planned life cycle is. Data-center-scale businesses don't play games with moving parts from one server to the next. They just buy servers fully assembled and ready to slam into the rack, or in fully assembled racks (or larger assemblies) that just need power/networking plugged in before being powered up.
 