Eight NVMe Drives RAIDed on AMD Ryzen Threadripper

FrgMstr

Just Plain Mean
Staff member
Joined
May 18, 1997
Messages
55,626
der8auer had a video up this morning that showed him running eight NVMe drives in RAID on a Threadripper motherboard. Since then the video has been made private, so NO SOUP FOR YOU! However, we did sneak a couple of screenshots (thanks cageymaru). AMD has stated that NVMe RAID support is on the way for Threadripper, and this would seem to back that up, but we do not see a new chipset driver on AMD's website. Below is a screen grab of a benchmark run and the hardware used in all its glory, as well as some UEFI documentation.

Check out the pics.

He is using ASUS HYPER cards, which each support four NVMe drives, for his testing.
 
Cool, but what would really require that level of performance on a consumer/workstation platform?
 
Cool, but what would really require that level of performance on a consumer/workstation platform?
You can always say "yes, but the one that's slightly slower is enough!"
 
Cool, but what would really require that level of performance on a consumer/workstation platform?

You have to be able to feed the beast! Say you were a YouTube content creator with an 8K camera setup. You might want something like this to speed up editing videos.
 
I wonder how fast the Intel system is. They charge a fee to unlock this feature and require that you strictly use Intel SSDs, which are half the speed of the Samsung drives.
 
Anyone want some hardware pron? 27GB/s is pretty quick!

2017-09-25 (26).png 2017-09-25 (18).png 2017-09-25 (25).png
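As a quick sanity check on that 27GB/s figure, here is the back-of-the-envelope math, assuming a Samsung 960 Pro class drive's rated ~3.5GB/s sequential read (the per-drive spec is an assumption, not something shown in the screenshots):

```python
# Rough sanity check on the 27 GB/s benchmark figure.
# PER_DRIVE_GBPS is an assumed rated sequential read for a
# 960 Pro class NVMe drive, not a number from this thread.
PER_DRIVE_GBPS = 3.5
DRIVES = 8

theoretical = PER_DRIVE_GBPS * DRIVES  # ideal RAID-0 aggregate
measured = 27.0                        # GB/s from the screenshot

print(f"theoretical aggregate: {theoretical:.1f} GB/s")   # 28.0 GB/s
print(f"scaling efficiency: {measured / theoretical:.0%}")  # 96%
```

So if the per-drive assumption holds, the array is scaling at roughly 96% of ideal, which is about as good as striping gets.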
 
remember when people used to say their computer is slow?

f-ast.!

I wonder how fast the Intel system is. They charge a fee to unlock this feature and require that you strictly use Intel SSDs, which are half the speed of the Samsung drives.

what?!
 
I wonder how fast the Intel system is. They charge a fee to unlock this feature and require that you strictly use Intel SSDs, which are half the speed of the Samsung drives.

As I understand it, you only need to use Intel drives to create a bootable array. You don't need them to create a non-bootable array and use it within Windows. Also, this only applies if you are using VROC. You can still install M.2 drives and go through the PCH with full bootable NVMe support. I tested this on the GIGABYTE X299 Aorus Gaming 3, which has VROC support, using Corsair MP500 SSDs.
 
This is where SANs are going to go.

Removable NVMe disks, each with 4x PCIe lanes all to itself. 128 lanes on Epyc means 32 disks; let's assume a 4TB capacity each. We're looking at 128TB raw space.

Maybe you've got some PCIe overlap, since you're going to want 100Gb/s or 400Gb/s cards in there for connectivity.

I guess we'll be waiting for PCIe 4.0 or 5.0 before 1Tb/s connections become 'common', in the datacenter at least.
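The lane and capacity math in that post checks out; a minimal worked version (the 4TB-per-disk figure is the poster's assumption):

```python
# Back-of-the-envelope math for the Epyc NVMe SAN idea.
LANES_TOTAL = 128     # PCIe lanes on an Epyc socket
LANES_PER_DISK = 4    # x4 link per NVMe disk
DISK_TB = 4           # assumed capacity per disk

disks = LANES_TOTAL // LANES_PER_DISK
raw_tb = disks * DISK_TB
print(disks, "disks,", raw_tb, "TB raw")  # 32 disks, 128 TB raw
```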
 
You do this because you can, not because you need to!

Yeah, I hate how people say "why do you need this, blah, blah, blah. My SATA SSD is fast enough, blah, blah, blah." This is what's holding technology back: people asking why it needs to be this fast instead of how it can be faster. That is why the US is falling behind technology-wise. Other countries want faster/more advanced stuff; we want what's "good enough".
 
I wonder if that was what was in the 1.60 BIOS for the ASRock X399 Fatal1ty Gaming Pro board that I downloaded last Friday and that was later pulled. Not that I could test it either way, since I reflashed to 1.50 and only have one NVMe drive. I remember it had some mention of new NVMe RAID options that required the latest drivers, but that wasn't what I saw in the screenshots; I was thinking they were going to let you just RAID the 3 onboard NVMe slots.
 
The cards are probably certified and tested for VROC/X299 and not necessarily any X399 motherboards. That said, the information might change once M.2 RAID functionality is rolled out fully for all X399 motherboards. This will require a BIOS update, as the AGESA code version has changed.
 
Cool, but what would really require that level of performance on a consumer/workstation platform?
Someone wants to run through their rainbow tables very quickly? Or spend more on storage and less on memory for your databases? This is pretty cool after all. If you build it, they will come...
 
Yeah, I hate how people say "why do you need this, blah, blah, blah. My SATA SSD is fast enough, blah, blah, blah." This is what's holding technology back: people asking why it needs to be this fast instead of how it can be faster. That is why the US is falling behind technology-wise. Other countries want faster/more advanced stuff; we want what's "good enough".

You also have to factor in that disk size requirements have stagnated, and at some point it gets hard for people to justify replacing a 512GB drive with another 512GB drive that is 4x as fast but costs several hundred dollars and offers no real-world benefit. CPUs and GPUs certainly still benefit from speed increases, but SSD speeds hit the current sweet spot pretty quickly.

Same reason processors that are 4-5 years old are still good enough for "average" users. The i5 in my work computer is 5 generations old now. Still runs Word, Outlook, command lines, and iTunes like a boss. (Looking it up now, it's 5 years old and still 22nm... there's that stagnation.)
 
remember when people used to say their computer is slow?

f-ast.!



what?!
Not fast, it's high bandwidth. Fast would be lower access latencies. ;\

There is a perceived difference between the latency of RAM and the latency of non-volatile SSD storage on the order of 100x.
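To put a rough number on that latency gap, here is a sketch using assumed typical figures (~100ns for a DRAM access, ~10µs for a fast NVMe read; neither number comes from this thread):

```python
# Illustrating the RAM vs. NVMe latency gap with assumed
# typical figures, not measurements from this thread.
DRAM_LATENCY_NS = 100       # ~100 ns per DRAM access
NVME_LATENCY_NS = 10_000    # ~10 us per fast NVMe read

ratio = NVME_LATENCY_NS / DRAM_LATENCY_NS
print(f"NVMe is ~{ratio:.0f}x slower than DRAM per access")  # ~100x
```

Bandwidth scales with striping; latency per access doesn't, which is the point being made above.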
 
You also have to factor in that disk size requirements have stagnated, and at some point it gets hard for people to justify replacing a 512GB drive with another 512GB drive that is 4x as fast but costs several hundred dollars and offers no real-world benefit. CPUs and GPUs certainly still benefit from speed increases, but SSD speeds hit the current sweet spot pretty quickly.

Same reason processors that are 4-5 years old are still good enough for "average" users. The i5 in my work computer is 5 generations old now. Still runs Word, Outlook, command lines, and iTunes like a boss. (Looking it up now, it's 5 years old and still 22nm... there's that stagnation.)

Intel's i5s haven't gotten much better in the last 4-5 generations either.

When I went from 5400 RPM to 7200 RPM platter drives it was noticeable. Jumping to a SATA II SSD was also noticeable. SATA III was also a meaningful change, all the more so when I put two of them in RAID. Once I eventually switch off my current rig and move to M.2 NVMe in RAID, it should also be a noticeable improvement.
 
I see it as a huge poor man's memory.

Vega's HBCC can use such arrays as a GPU memory/cache extender. You would be surprised how often GPU RAM limits cause problems.

If you find another way to boot, most TR boards have 3x NVMe onboard. Add 3x ~$110 128GB Samsung SM961 NVMe drives and voila: about 384GB at roughly 9GB/s read and 6GB/s write, with no other costs for controllers, etc. It compares to an 8-lane GPU's system memory access speed via the bus.

It's better than it sounds, as HBCC intelligently manages this resource as one in a pool, including faster spare system memory.
 