PCI-E Bifurcation on the Asus Pro WS WRX80E-SAGE SE WIFI - how crazy can I actually get?

RazorWind

I have a somewhat unusual use case. I need to replace the system I built in this thread with a more up-to-date one, preferably eliminating the two peli-racks with the disks. What I'd like to do, instead of the hardware RAID controllers, SAS expanders, and arrays of hard drives, is just plug a whole bunch of NVMe SSDs into a single motherboard. It appears that all the individual pieces required for this exist - I've found a motherboard and CPU combination that theoretically has enough PCI-E lanes and the necessary arrangement of slots, there are PCI-E breakout cards, and there are obviously plenty of NVMe SSDs to choose from.

Here's the motherboard I'd like to use:
https://www.newegg.com/asus-pro-ws-wrx80e-sage-se-wifi/p/N82E16813119391?Item=N82E16813119391

Here's the breakout card:
https://www.newegg.com/asus-90mc08a...-2-card/p/N82E16815293047?Item=9SIA25VH161402

Question is, though, can I plug in four of those breakout cards, fully populated, plus a graphics card, and reasonably expect it to work?

I downloaded and checked the manual, and what it says is that you can set up a RAID array with up to 10 SSDs. I take that to mean the chipset/CPU-level "hardware" RAID support is limited to 10 drives, but it doesn't say whether sixteen individual drives are supported. I don't actually need the hardware RAID - I can manage that in the OS with Windows Storage Spaces. Also, capacity isn't that big of an issue. The original 40TB arrays ended up being way bigger than we actually needed.

Anyone have any insight or experience with this?

As a sidebar, the system from the original thread served us well for the last ten years, and would probably still work, except our next project is in a very remote location where shipping even one of the peli-racks is going to be challenging.
 
From what I understand, yes, you should be able to run that many single drives. I can do 9 on my base sTRX board (but I lose all the SATA ports, I think), so I don't see why a WRX board wouldn't support more.
 
Interesting...

I have that board and those cards, and I've run a couple of the cards at the same time. Each PCIe slot can be configured for what ASUS refers to as RAID mode (which is basically x4/x4/x4/x4 bifurcation). It does not RAID the drives (kind of misleading, given the name of the setting). There are plenty of CPU PCIe lanes to support that many drives. The only thing you might run into is some BIOS / UEFI limit. No idea if you can actually RAID them in hardware.

The cards (one is included with the board) are just bifurcated M.2 slots. They do a wonderful job of keeping the NVMe drives cool.

What kind of video card are you going to run? Something like a 3090 Ti and 4 NVMe cards might be a tight fit. The board does have basic onboard video (i.e. server-type VGA).

Might want to consider power with that many cards. The motherboard requires two 6-pin PCIe power connections, one 8-pin PCIe power connection, two 8-pin CPU power connections, and the 24-pin ATX connection. The cards themselves only pull power from the PCIe slot.

It's a beast of a board. Only real complaint is that the onboard USB audio sometimes drops out and requires a reboot to fix. I think I've fixed that by disabling power savings on the root hub it's connected to.

If you plan to use the eight-channel memory to its maximum, I think you need at least a 32-core CPU to access the full memory width at the same time. With smaller core counts, it configures as multiple quad-channel groups.

It would not surprise me if you hit some saturation limit with that many drives in some part of the processor architecture. You might be better off with fewer, bigger drives. But I'm just guessing.

Curious to see how it goes. If you have a specific question on the board or card, let me know.

Good luck.
 
Thanks guys!

I ended up having to basically YOLO this and just assume that it was going to work, since we were running out of time to get the components ordered and the system assembled. Here's what I ended up with:

Threadripper Pro 3955WX
Asus WRX80 Sage Mouthful Wifi
128GB Corsair Vengeance 3200MHz
21x Western Digital SN850 2TB SSDs (plus two more, as spares)
5x Asus Hyper M.2 breakout cards (plus one more as a spare)
EVGA 3080 Ti - The "XC" reference model, which is a 2.25 slot card
Seasonic Prime 1300W power supply
Noctua NH-U9S STR4

The good news is, it works pretty much exactly as I expected it to. The only real snag is that when doing what appears to be the equivalent of RAID 5 with Windows Storage Spaces, you lose way more than one drive's worth of usable capacity to parity - as if it's using two or three drives for parity. I may need to spend some more time with that to see if I can adjust it.

The PCI-E bifurcation works pretty much exactly as one would expect, though. There's a settings page in the BIOS that lets you configure how each of the seven PCI-E slots gets divided up. The default is a normal x16 slot, but you also have the option of x8/x8 and what Asus refers to as "NVMe RAID Mode," which is actually just x4/x4/x4/x4. This lets you run the NVMe drives as individual drives. It's important, though, not to enable the separate SATA "NVMe RAID Mode" option, which I gather is a type of "hardware" RAID, because that limits you to ten NVMe drives in the entire system, regardless of whether they're participating in a RAID or not.

Anyway, the parts arrived late last week, so I got to work on Monday assembling the system. Here it is mid-assembly.
IMG_0731.JPG

I had to unpack all 21 SSDs by hand. It produced a freaking mountain of packaging waste. I probably should have tried harder to find a way to buy them in bulk, but it's not like I needed that many.
IMG_0732.JPG IMG_0733.JPG

Here's the first pass at it, all the way assembled. I had planned to use five of the "Hyper" cards, but the graphics card was too big, and I couldn't fit the fifth. For the purpose of making sure all the parts worked, though, this was good enough.
IMG_E0734.JPG

I really wanted to get that fifth expansion card in there, though, so I brought the system home yesterday evening, took it apart, and did some rearranging. The idea being that, since this board has seven PCI-E x16 slots, the graphics card doesn't necessarily need to be in the top one. In fact, it could just as easily be in #6, which would leave 1-5 free for my expansion cards. That left me with a problem, though. The card would physically fit in there like that, but there's very little space between it and the case wall for cooling air to pass.
graphics card clearance.jpg

I don't think we'll use the graphics card all that hard in practice, but it made me uneasy. I at least need it to not overheat. The solution was obvious...

...old skool case mod! I haven't cut a hole in a case since high school.

holes.jpg holes2.jpg

And here it is fully assembled with all five cards.
final assembly.jpg

And then in the peli-rack. It's heavy as shit now. It has to be at least 100 pounds. Gonna have to tuck my hernia back in before I lift it into the car.
pelirack.jpg


Disk performance is... adequate. :D
The limiting factor is likely to be the read speed from the SATA SSDs that the data comes from, so while I suspect that I could get these numbers higher by adjusting things, I doubt the speed of the NVME arrays will ever really be a limiting factor.
diskmark.png
 

Curious to see how it goes. If you have a specific question on the board or card, let me know.
Actually, I do have a question for you. Have you ever been able to find or get a driver for this board's SATA controllers?

They're some kind of ASMedia 106X chipset, but Windows just detects them as generic SATA controllers. I'd really like to have the actual drivers, so I can enable the hot plug behavior for the little hot swap bays on the front, but I can't find any that actually work. Other than that, this board is pretty epic. I kinda want one for my work from home rig.
 
Nice build! Kind of insane. Thanks for posting the details.

Depending on your needs, you could change out the GPU for a Quadro RTX A4000. It's a single-slot card and would give you room for one more Hyper card.

The only SATA drives I'm using are a couple of Blu-ray drives. I think I have them plugged into the native AMD SATA ports. Not sure if those have hot-swap support or not.

It's hard to believe, with the rest of the specs on this board, that ASUS would pick an old and crappy ASMedia chip. The manual indicates it's an ASM1061.

If you trust the WHQL signing for protection, you can try the drivers from:

https://www.station-drivers.com/ind...ller-Drivers-Version-3.2.3.0-WHQL/lang,en-gb/

I have not used those. You might also find a different manufacturer with drivers for it. Maybe SuperMicro or ASRock Rack.

There are some links posted in the ROG Asus forum. Not sure I would trust those as much. Links appear to be on Mega. https://rog.asus.com/forum/showthread.php?118061-DRIVERS-ASMedia-SATA-USB

Possible OEM add-in card driver links: https://www.techpowerup.com/forums/...d-firmware-for-asmedia-asm106x-series.264571/
 
The Storage Spaces behavior is because Microsoft can't do chunk and column calculations worth a damn. You have to do them by hand. Give me a moment, I'll dig up the commands... it's fucking stupid.
 
New-VirtualDisk -StoragePoolFriendlyName "YourStoragePoolNameHere" -FriendlyName "AFriendlyName" -NumberOfColumns 5 -Interleave 128KB -ResiliencySettingName Parity -UseMaximumSize

When you format NTFS, set the allocation size to 512K (4x the interleave, since you have 4 data columns and 1 parity column, the way Storage Spaces does the calculation). Overhead will be 20% (4+1).

Basically, create the storage pool, but do NOT create the virtual disk (or delete it if you've already created it), then use the above PowerShell command to create the virtual disk, and Disk Management to format it as specified. You could also use 256K and 1MB for interleave / allocation.
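If you're starting completely from scratch, the whole sequence looks roughly like this. This is just a sketch - the pool and disk names are placeholders, it assumes a single (default) Windows Storage subsystem, and you should eyeball the disk list before committing:

# Grab the drives that aren't already in a pool - check this list before you commit
$disks = Get-PhysicalDisk -CanPool $true

# Create the pool (assumes one storage subsystem, the usual "Windows Storage on <hostname>")
New-StoragePool -FriendlyName "NVMePool" -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks

# Create the parity space by hand so the column count and interleave are exactly what we want
New-VirtualDisk -StoragePoolFriendlyName "NVMePool" -FriendlyName "NVMeParity" -NumberOfColumns 5 -Interleave 128KB -ResiliencySettingName Parity -UseMaximumSize

# Bring it online, partition it, and format with the 512K allocation unit (4 data columns x 128K interleave)
Get-VirtualDisk -FriendlyName "NVMeParity" | Get-Disk | Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 512KB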
 
Nice build! Kind of insane. Thanks for posting the details.

Depending on your needs, you could change out the GPU for a Quadro RTX A4000. It's a single-slot card and would give you room for one more Hyper card.

The only SATA drives I'm using are a couple of Blu-ray drives. I think I have them plugged into the native AMD SATA ports. Not sure if those have hot-swap support or not.

It's hard to believe, with the rest of the specs on this board, that ASUS would pick an old and crappy ASMedia chip. The manual indicates it's an ASM1061.

If you trust the WHQL signing for protection, you can try the drivers from:

https://www.station-drivers.com/ind...ller-Drivers-Version-3.2.3.0-WHQL/lang,en-gb/

I have not used those. You might also find a different manufacturer with drivers for it. Maybe SuperMicro or ASRock Rack.

There are some links posted in the ROG Asus forum. Not sure I would trust those as much. Links appear to be on Mega. https://rog.asus.com/forum/showthread.php?118061-DRIVERS-ASMedia-SATA-USB

Possible OEM add-in card driver links: https://www.techpowerup.com/forums/...d-firmware-for-asmedia-asm106x-series.264571/
Thanks man!

I couldn't get the first link to download, and I think I'd already tried the other two, but I made sure I had my mother on speed dial, put on my full body condom, and tried this one...
https://oemdrivers.com/chipset-asmedia-106x-sata-controller-driver

...and that seemed to work. I don't have the "Eject [drive name]" thing in the system tray, but I can insert and remove the flight disks now, and they get detected, which is good enough. I totally agree though, this seems pretty jank for a motherboard that costs this much.

I considered a Quadro card, but I didn't want to have a monster card in there with a blower cooler. There's a pretty good chance that someone, possibly even me, is going to have to sleep in the same room as this thing while it's running, so I wanted to at least try to minimize noise. Furthermore, I don't think the extra memory and features will really help us. The 3080 Ti is more than enough as it is.

New-VirtualDisk -StoragePoolFriendlyName "YourStoragePoolNameHere" -FriendlyName "AFriendlyName" -NumberOfColumns 5 -Interleave 128KB -ResiliencySettingName Parity -UseMaximumSize

When you format NTFS, set the allocation size to 512K (4x the interleave, since you have 4 data columns and 1 parity column, the way Storage Spaces does the calculation). Overhead will be 20% (4+1).

Basically, create the storage pool, but do NOT create the virtual disk (or delete it if you've already created it), then use the above PowerShell command to create the virtual disk, and Disk Management to format it as specified. You could also use 256K and 1MB for interleave / allocation.
Thanks, this worked a lot better. Figuring out how to get better control of it via that command line had been on my to-do list for a while; you probably saved me hours of pulling my little remaining hair out.
 
Pretty wild build! But I'm surprised you didn't look at some used enterprise gear for this use case. A lot of that stuff is already set up for the number of drives you (originally) needed and whatnot. I think the only thing you would have had to get as an option would have been the GPU, as most servers don't have the power leads without one.
 
Glad I could help! Figuring that out (10MB/s write perf with the defaults... ugh) took me a solid day. BTW, the allocation size sets the minimum on-disk size of a file - my command gives you a minimum 512K per file. Most of what I deal with is large (4-40GB), so this is fine. If you're doing a lot of small files, you might want to do 64/256 or something else (basically, allocation is always 4x the interleave for a 5-column layout, which is most effective for space utilization).
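To spell the rule of thumb out, here's a quick sketch of the arithmetic (assuming single parity, so one column per stripe holds parity - the numbers are the ones from my command above):

# 5-column single-parity layout: one column per stripe is parity, the rest carry data
$columns    = 5
$interleave = 128KB
$dataCols   = $columns - 1              # 4 data columns
$allocUnit  = $interleave * $dataCols   # 512KB - one NTFS cluster spans exactly one full stripe
$overhead   = 1 / $columns              # 20% of raw capacity goes to parity
"{0} KB allocation unit, {1:P0} parity overhead" -f ($allocUnit / 1KB), $overhead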
 
Pretty wild build! But I'm surprised you didn't look at some used enterprise gear for this use case. A lot of that stuff is already set up for the number of drives you (originally) needed and whatnot. I think the only thing you would have had to get as an option would have been the GPU, as most servers don't have the power leads without one.
Oh, I looked. The trouble was that I couldn't find a product we could just buy that satisfied all of the criteria at the same time. It needed to be:
* Rack mountable (to fit in the Pelican rack) or a natively rugged case <= This was the hard part
* No part greater than 4U (so that weight of any one component wasn't excessive)
* Compatible with a PCI-E graphics card
* One single system, not a NAS + clients
* Cheap-ish
* Enough capacity
* Maybe one other thing? I forget now

I found lots of options that satisfied 2/3 of those criteria, but no one seemed to offer exactly what we needed, and it really did need to be exactly what we needed.
Glad I could help! Figuring that out (10MB/s write perf with the defaults... ugh) took me a solid day. BTW, the allocation size sets the minimum on-disk size of a file - my command gives you a minimum 512K per file. Most of what I deal with is large (4-40GB), so this is fine. If you're doing a lot of small files, you might want to do 64/256 or something else (basically, allocation is always 4x the interleave for a 5-column layout, which is most effective for space utilization).
Yeah, I learned about that a few years ago when I tried to build an ArcGIS tile cache for the entire world, which produces tons and tons of mostly empty .png files. Theoretically, it should only be a few GB, but because of the allocation size I had on that volume, it totally filled up the drive. Thank goodness ESRI (the Microsoft of geography software) eventually added enough features to the tile cache that you don't have to do that boneheaded crap anymore.
 
That is a really nice build. Your post came up in a search and I'm looking to do the same build, lol. I'm planning on 16 drives with 4 cards and I was worried about some hardware configuration gotchas, but it sounds like everything went smoothly. One thing I'm curious about is the benchmark results. Is the Crystalmark using a source benchmark file on a SATA SSD and not the NVME drives? I'm thinking the speed results look a bit lower than expected, especially for the write. How is the volume configured? Is it in a RAID 0 or some other configuration?
 
That is a really nice build. Your post came up in a search and I'm looking to do the same build, lol. I'm planning on 16 drives with 4 cards and I was worried about some hardware configuration gotchas, but it sounds like everything went smoothly. One thing I'm curious about is the benchmark results. Is the Crystalmark using a source benchmark file on a SATA SSD and not the NVME drives? I'm thinking the speed results look a bit lower than expected, especially for the write. How is the volume configured? Is it in a RAID 0 or some other configuration?
I believe what Crystal Diskmark does is generate the data it writes to the disk programmatically, as opposed to reading a file, which would limit the test to whatever the read speed of the source disk is.

The volume in that test was configured using the default Windows Storage Spaces configuration for 10 Western Digital SN850s in "parity" mode. It's similar to a RAID 5 conceptually, but the implementation is purely software.

I'm curious - what is your use case? Mine is storage and processing of airborne lidar data in the field.
 
I believe what Crystal Diskmark does is generate the data it writes to the disk programmatically, as opposed to reading a file, which would limit the test to whatever the read speed of the source disk is.

The volume in that test was configured using the default Windows Storage Spaces configuration for 10 Western Digital SN850s in "parity" mode. It's similar to a RAID 5 conceptually, but the implementation is purely software.

I'm curious - what is your use case? Mine is storage and processing of airborne lidar data in the field.

I'm building a new software development machine. It has been since 2011 that I last upgraded, lol. My current project has 1.5TB of spatial data that I need to extract, transform, and insert into a database for an app. I also want to host all my virtual machines on the RAID volume.

That SEQ1M write speed is what one drive by itself would get, not 20 NVMe drives in any form of RAID. The read speed would be expected with just 4 drives in RAID 0. It would make sense to see if there is something wrong, or maybe Storage Spaces is the bottleneck.

If it isn't too late, drop Storage Spaces as a test and create a RAID 0 in diskmgmt.msc instead. You would delete all the partitions first and create a new dynamic volume - it will let you pick which RAID level to create - and then benchmark that. I think you should get north of 20GB/s on both read and write for the SEQ1M test. That is a badass setup you have, and I think Storage Spaces is wasting all the effort of having 5 loaded Hyper cards working together to create a big & fast volume. If you choose to do that test, post back the results.
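If you'd rather script that than click through diskmgmt.msc, something along these lines should do it. Just a sketch - the disk numbers are placeholders (check them with Get-Disk first), it needs an elevated prompt, and it wipes whatever is on those disks:

# DESTRUCTIVE: wipes the listed disks. The numbers are placeholders - verify with Get-Disk first.
$stripeDisks = 1, 2, 3, 4

$script = @(
    foreach ($d in $stripeDisks) {
        "select disk $d"
        "clean"
        "convert dynamic"
    }
    # Striped (RAID 0) dynamic volume across all of the disks, quick-formatted NTFS
    "create volume stripe disk=$($stripeDisks -join ',')"
    "format fs=ntfs label=StripeTest quick"
    "assign letter=R"
)

$script | Set-Content "$env:TEMP\stripe.txt"
diskpart /s "$env:TEMP\stripe.txt"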
 
I'm building a new software development machine. It has been since 2011 that I last upgraded, lol. My current project has 1.5TB of spatial data that I need to extract, transform, and insert into a database for an app. I also want to host all my virtual machines on the RAID volume.

That SEQ1M write speed is what one drive by itself would get, not 20 NVMe drives in any form of RAID. The read speed would be expected with just 4 drives in RAID 0. It would make sense to see if there is something wrong, or maybe Storage Spaces is the bottleneck.

If it isn't too late, drop Storage Spaces as a test and create a RAID 0 in diskmgmt.msc instead. You would delete all the partitions first and create a new dynamic volume - it will let you pick which RAID level to create - and then benchmark that. I think you should get north of 20GB/s on both read and write for the SEQ1M test. That is a badass setup you have, and I think Storage Spaces is wasting all the effort of having 5 loaded Hyper cards working together to create a big & fast volume. If you choose to do that test, post back the results.
A big part of that, I suspect, may have been that he used the default chunk sizes, which Storage Spaces gets straight-up wrong (all misaligned). I'd be curious to see it run again after the change I suggested. It should scream then.
 
A big part of that, I suspect, may have been that he used the default chunk sizes, which Storage Spaces gets straight-up wrong (all misaligned). I'd be curious to see it run again after the change I suggested. It should scream then.
I think I used the settings you suggested exactly, except the drives are actually arranged into two pools of 10 drives each, not one giant one. Anyway, here's the result from running the test on one of them.

I think that should be fine. The point of this isn't really about the speed, but the need to have two great big storage volumes that are as un-killable as possible. Having them be fast is useful, but the longest-running operation is the copy from the flight disks to the arrays, and that's limited to the sustained performance of one SATA SSD. Anything faster than that one disk can deliver is mostly wasted, although I guess it does help somewhat once that processing starts, but I think the CPU is the limiting factor there.
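For anyone reproducing the two-pool layout from PowerShell rather than the GUI, something like this would do it (a sketch only - the names, the NVMe filter, and the sort order are placeholders; check the disk list before committing):

# Split the poolable NVMe drives into two pools of 10 each (placeholder names - verify the list first)
$ssds = Get-PhysicalDisk -CanPool $true | Where-Object BusType -eq 'NVMe' | Sort-Object DeviceId
$subsystem = (Get-StorageSubSystem).FriendlyName

New-StoragePool -FriendlyName "FlightPool1" -StorageSubSystemFriendlyName $subsystem -PhysicalDisks $ssds[0..9]
New-StoragePool -FriendlyName "FlightPool2" -StorageSubSystemFriendlyName $subsystem -PhysicalDisks $ssds[10..19]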
 

crystal2.png
There's definitely overhead in Windows for software RAID setups - even in a RAID 0 it's pretty hefty.
I have a few 3GB/sec read/write ioDrives in my setup, but in a RAID 0 stripe I only get about 2GB/sec per drive out of them.
I can run parallel benchmarks on all 3 and get 2.7GB/sec each just fine, so it's not a CPU limitation.

Definitely some overhead there.
 
There's definitely overhead in Windows for software RAID setups - even in a RAID 0 it's pretty hefty.
I have a few 3GB/sec read/write ioDrives in my setup, but in a RAID 0 stripe I only get about 2GB/sec per drive out of them.
I can run parallel benchmarks on all 3 and get 2.7GB/sec each just fine, so it's not a CPU limitation.

Definitely some overhead there.
I'm surprised you get less speed out of the stripe than you do with a single drive. Queue depth matters too. With Atto's benchmark you can increase the queue size and that increases the MB/s.
 
I'm surprised you get less speed out of the stripe than you do with a single drive. Queue depth matters too. With Atto's benchmark you can increase the queue size and that increases the MB/s.
No, I get more speed than a single drive, just 2GB/sec per drive, so 3 drives get me 6GB/sec.

If I build this in linux with mdraid, I can get 8GB/sec since I can get the full 2.7GB/sec per drive.
 
And then in the peli-rack. It's heavy as shit now. It has to be at least 100 pounds. Gonna have to tuck my hernia back in before I lift it into the car.
Why not use a dolly to cart that thing around? I used to do that when my shoulder was out of action for almost a year. Some dollies are fancy and make loading into conveyances easier, and you could also use simple machines. I built one for this back in the 3rd grade, in the 90s, to load 90lb dogs, still in their crates, into my mom's V8 SUVs. It consisted of two 6x6" posts and a 4x4 post with a pulley and a winch.
 
Why not use a dolly to cart that thing around? I used to do that when my shoulder was out of action for almost a year. Some dollies are fancy and make loading into conveyances easier, and you could also use simple machines. I built one for this back in the 3rd grade, in the 90s, to load 90lb dogs, still in their crates, into my mom's V8 SUVs. It consisted of two 6x6" posts and a 4x4 post with a pulley and a winch.
The Peli-rack has wheels, actually, like a piece of luggage, when the lids are on it. Unfortunately, when I took it home that weekend, I didn't bring the lids, because they're a pain in the ass to put on and take off by yourself. I guess I could have used my engine crane to lift it into the car, but it's not like it's THAT heavy.

In most of the places we go, we use the hotel luggage carts to move it around. Now that it's only one case, we'll probably use the built-in wheels more often.
 
The Peli-rack has wheels, actually, like a piece of luggage, when the lids are on it. Unfortunately, when I took it home that weekend, I didn't bring the lids, because they're a pain in the ass to put on and take off by yourself. I guess I could have used my engine crane to lift it into the car, but it's not like it's THAT heavy.

In most of the places we go, we use the hotel luggage carts to move it around. Now that it's only one case, we'll probably use the built-in wheels more often.
I have a couple of their mobile armory cases but nothing quite like those - https://www.pelican.com/us/en/professional/rack-mount-cases/ - do you mean these? We have a loading dock at camp that is easy for stuff like this; it depends on the kind of vehicle you're backing up to it with, though, and what is being loaded - e.g. fit dogs can leap down into Gators and stuff, and vice versa.
 
I have a couple of their mobile armory cases but nothing quite like those - https://www.pelican.com/us/en/professional/rack-mount-cases/ - do you mean these?
The ones we have are Pelican-Hardigg "BlackBox" cases, which used to be a separate line from the Pelican-branded ones. The rackmount cases are made by Hardigg, which used to be a separate company until it was purchased by Pelican at some point. They're different from a normal Pelican case - a blow-molded shell versus the injection-molded Pelican ones. Hardigg also makes frakkin huge ones for other things. I have a 4' by 4' by 3.5' one for our actual airborne lidar system that I bought back in 2012 because the plywood box it came in was embarrassingly shitty.

Regular Pelican cases are super common in the sciences, and I'd imagine any other industry that does field work with delicate and expensive equipment. The Pelican 1600 series is actually the OE packaging that a lot of professional grade survey equipment comes in. Trimble GPS receivers generally come in a custom-made yellow one, with nifty pockets for the batteries, monopod spike, et cetera inside.
 
Glad to see someone got so many breakout cards working in this mobo. I've had a lot of issues getting more than one card to work: the motherboard is reluctant to boot the OS, or won't boot past the CPU/mem test at all, as soon as a second card is added in any slot. I've checked all the gotchas in your build log above, and I've got some questions I'd be grateful if you would answer, to help me keep troubleshooting my own issues filling this mobo up with breakout cards.

- Which bios version are you on?
- What memory are you running?
- Have you populated any of the onboard m.2 slots? (If I recall correctly there are three and I have populated all three and one is being used for the OS)
- Have you plugged in the additional power sockets on the side of the board? I haven't been able to since they sit right up against the side of the 4U case the board is in

The only other difference in this setup is that the existing riser card (which works fine on its own) is the same as yours (Asus), whereas the additional ones I am adding are Gigabyte Aorus riser cards. I've ordered a couple more Asus breakout cards to eliminate that potential issue, although not with high hopes of a fix.

One last observation - if I boot with a second breakout card installed, but don't turn on bifurcation for that slot, it boots fine with one drive visible. The gremlins appear as soon as bifurcation is enabled on a second slot.
 
Yeah, I'd be surprised if the onboard m.2 were at issue. Manual states:

"The U.2_1 slot shares bandwidth with M.2_2, U.2_2 slot shares bandwidth
with M.2_3. When M.2_2 slot is being populated, U.2_1 slot will be disabled.
When M.2_3 slot is being populated, U.2_2 slot will be disabled."

That's the only shared bandwidth I've seen mentioned on this board.

That said, I'm not using the onboard m.2 slots. I prefer the active cooling on the AIC.

I've run 2 AIC cards (both ASUS) with no issues.

Things I would look at:
1. Are the NVMEs on the AICs blank?
2. Are you mixing Gen 3 and Gen 4 NVMEs? I am, but I was suspicious if the bifurcation / BIOS would support it.
3. As Iopoetve and SamirD have stated, you are probably going to have to hook up those PCIe power connections.
4. Have you enabled any Compatibility Mode setting in the BIOS? I cannot remember if this board has one, but if it does, that can screw with the NVMe bootup.
5. Which slots are you using? I've used 6 and 7. Currently, I'm only using Slot 7.

Maybe post your system specs to give us a better idea of what could be in play.
 
Glad to see someone got so many breakout cards working in this mobo. I've had a lot of issues getting more than one card to work: the motherboard is reluctant to boot the OS, or won't boot past the CPU/mem test at all, as soon as a second card is added in any slot. I've checked all the gotchas in your build log above, and I've got some questions I'd be grateful if you would answer, to help me keep troubleshooting my own issues filling this mobo up with breakout cards.

- Which bios version are you on?
- What memory are you running?
- Have you populated any of the onboard m.2 slots? (If I recall correctly there are three and I have populated all three and one is being used for the OS)
- Have you plugged in the additional power sockets on the side of the board? I haven't been able to since they sit right up against the side of the 4U case the board is in

The only other difference in this setup is that the existing riser card (which works fine on its own) is the same as yours (Asus), whereas the additional ones I am adding are Gigabyte Aorus riser cards. I've ordered a couple more Asus breakout cards to eliminate that potential issue, although not with high hopes of a fix.

One last observation - if I boot with a second breakout card installed, but don't turn on bifurcation for that slot, it boots fine with one drive visible. The gremlins appear as soon as bifurcation is enabled on a second slot.

It's in use currently, so I can't very easily check which BIOS version it's running, but it's whatever version was on it when it came to us. It worked out of the box, so I never felt the need to update it.

The memory is Corsair Vengeance LPX, one kit of eight 16GB sticks. I don't have a link to the exact part handy, but it's the 3200MHz speed. The system refuses to POST if I try to enable XMP, so I just run it at 2133, or whatever the default is, which is plenty fast for our purposes.

I'm currently only running an SSD in one of the on board M.2 slots (for the OS), but it did work if I plugged SSDs into all three.

I couldn't fit a power connector into the auxiliary power connectors at the bottom of the board either, since the case wall is in the way. I could see that becoming necessary if you were running several graphics cards, but it doesn't seem to be a problem with SSDs and breakout cards - mine all work fine.

I'd echo what others have said, and suggest you post the full list of hardware you're running. Maybe the problem is actually some other part of the system that's not playing nice? One thing to note is that I'm using 21 of the same SSD, as opposed to mixing and matching. No idea how tolerant it is of using different drives, although I would think it should support that... because why wouldn't it?
 
Yeah, TR boxes are... finicky at 128G plus until you fiddle voltages. Every one I've touched is that way.
 
Just wanted to get back to this, as I finally had the chance to try fixing the issue. Thanks everyone for all the suggestions.

Installing more of the Asus Hyper riser cards, and removing the Gigabyte Aorus risers that seemed to cause the malfunctions, resolved the issues. I installed two more Hyper cards in this Sage workstation with zero drama - same NVMe model as in the Gigabyte risers.

I have another near-identical workstation with two Hyper cards in it already; I will make an attempt to use the Gigabyte risers in it as a test, but for now it seems like either I have a fault with those specific Gigabyte risers or a compatibility issue with them in general. They work fine in a different, older workstation mobo, so I am leaning towards a compatibility issue at the moment.
 
FYI, both the v1 and v2 of this board received a couple of decent BIOS updates in 2023, and the most recent one, a few days ago, also finally brought Q-Fan control into the BIOS menu, which means not having to log into the BMC interface just to control the case fans - which was somewhat annoying considering the chipset is not supported by AMD for server usage to begin with.
 
Heads up that both the Rev1 and Rev2 of this board received a couple of decent BIOS updates in 2023, and the most recent one, a few days ago, also finally brought Q-Fan control into the BIOS menu, which means not having to log into the BMC interface just to control the case fans - which was fairly annoying considering it's a workstation board.
I'm waiting for storm peak. :D
 
I'm waiting for storm peak. :D
No doubt, but five figures for the most basic configurations is what to expect, and who knows if they do the Lenovo exclusivity lock-in thing again for the lower core count SKUs, as they did for TR Pro. From a HEDT/Enthusiast/Prosumer standpoint, Storm Peak and Intel W790E are cool in that they exist, but it's hefty early-adopter freight for a lot of DDR5 and PCIe 5 componentry whose add-in ecosystems are still too green. Exciting for business/enterprise though.

I'm as annoyed with the abandonment of HEDT as everyone, but I also recognize the business realities that caused the shift. All that aside - and granted, the timing of buying into a WRX80 system now just feels wrong - there are still some interesting use cases for anyone in need of lots of I/O without resorting to older hardware that's really only good for I/O.

$350 for a WRX80-SAGE MB on ebay, $999 for a 5955WX 16-core, a handful of DDR4-3600 modules I had laying around, and I've got 128 PCIe 4.0 lanes, including 7 full x16 PCIe 4.0 slots direct to the CPU. There's nothing else I'm aware of that comes close to 128 PCIe 4.0 lanes at this ballpark price point without sacrificing IPC, clock speeds, the IMC, or consumer add-in hardware compatibility. You retain workstation/client ability, video editing, ML/AI, lots of VMs, and even gaming. A cheap-ish ebay Epyc MB/CPU/RAM combo can seem attractive at first glance, but it comes with lower IPC, an inferior IMC, poor handling of memory channels on lower core count CPUs, and all sorts of other subtle hardware-level glitchiness and incompatibility that arise when mixing and matching add-in components.
 
No doubt, but five figures for the most basic configurations is what to expect, and who knows if they do the Lenovo exclusivity lock-in thing again for the lower core count SKUs, as they did for TR Pro. From a HEDT/Enthusiast/Prosumer standpoint, Storm Peak and Intel W790E are cool in that they exist, but it's hefty early-adopter freight for a lot of DDR5 and PCIe 5 componentry whose add-in ecosystems are still too green. Exciting for business/enterprise though.

I'm as annoyed with the abandonment of HEDT as everyone, but I also recognize the underlying business realities that caused the shift. Putting all that aside - and granted, the timing of someone buying into a WRX80 system now just feels wrong - there are still some compelling use cases for anyone in need of lots of I/O without resorting to hardware that's really only good for I/O.

In my case: $350 for an ebay ASUS SAGE, $999 for a 5955WX 16-core CPU, a handful of DDR4-3600 modules I had laying around, and I've got 128 PCIe 4.0 lanes, including 7 full x16 PCIe 4 slots wired to the CPU. There's nothing else I'm aware of that comes close in terms of PCIe 4 slots per dollar without sacrificing IPC and clock speeds - meaning you retain workstation tasks, things like video editing, lots of VMs, and even gaming. A cheap ebay Epyc combo can seem attractive, but it comes with lower IPC, a worse IMC, worse handling of memory channels on lower core count CPUs, and all sorts of other subtle hardware-level glitchiness and incompatibility that can arise when mixing and matching add-in components.
That's why I'm hoping Storm Peak HEDT is good. I use a lot of slots over time, and that would be the perfect combination for exactly the reasons you said. I'm hoping that in a few months DDR5 will have come down a bit and that it won't be that bad (and I doubt exclusivity on HEDT; possibly on Pro).
 
I have a similar system, but with 3 of the cards. I've found that 8 of the drives get AER correctable errors, slowly but steadily, from the controllers on the NVMe drives if they're plugged into slots 1-4. If they're plugged into slots 5-7, they get thousands of errors, to the point that the system is so latent to any input that it's unusable. If you're using Windows, would you check Event Viewer -> Windows Logs -> System and see if you have WHEA errors from the drives' controllers?
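If it's easier than clicking through Event Viewer, a PowerShell query like this should list them - the WHEA-Logger provider is where Windows logs corrected hardware errors, and it may return an error instead of output if there are simply no matching events:

# List recent WHEA hardware error events from the System log
Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'Microsoft-Windows-WHEA-Logger' } -MaxEvents 50 |
    Select-Object TimeCreated, Id, LevelDisplayName, Message |
    Format-Table -AutoSize -Wrap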
 
No errors in the event log on my system.

That sounds like a bitch of a problem. I'd be inclined to suspect the motherboard, but I definitely don't envy you having to hash this out with Asus support. I can only imagine what a pain they'd be about it.

Have you tried removing the SSDs from the add in boards one at a time, or testing them on another system, to rule out maybe one of them being the problem? I assume you've done the obvious, like update the BIOS and install the current chipset drivers, as opposed to just rolling with whatever Windows comes with?
 
No errors in the event log on my system.

That sounds like a bitch of a problem. I'd be inclined to suspect the motherboard, but I definitely don't envy you having to hash this out with Asus support. I can only imagine what a pain they'd be about it.

Have you tried removing the SSDs from the add in boards one at a time, or testing them on another system, to rule out maybe one of them being the problem? I assume you've done the obvious, like update the BIOS and install the current chipset drivers, as opposed to just rolling with whatever Windows comes with?
Yeah, I've done all of the usual things, but I haven't had the time to swap individual drives. This is a common occurrence; a lot of people are experiencing it.

I've been through 3 motherboards, they all do it.

I did also open a case with SK Hynix since all of the SSDs are the same exact model.

They have a really non-existent customer service department, so that's been interesting.
 