Gaming PC... designed around storage??

Phrozt

Weaksauce
Joined
Apr 20, 2007
Messages
102
I really haven’t done a ton of research for this build, and I’m doing something different this time. I’ve used Intel/Nvidia for every build, but not because I’m a fanboy. I usually buy high middle tier or low high tier, and AMD has just never had any offerings in that range when I’m looking to build. However, as I’m sure you’re all aware, this is quite a time to be alive, because AMD is kicking the pants off of Intel w/their latest offerings. I’m definitely going for a 5900x for the CPU, and I’ll most likely go for an AMD GPU as well, but I’m going to wait for the benchmarks first. If they aren't as promised, I'll go with a 3080.

The other interesting thing about this build is the storage arrangement. I do a couple of interesting things in that department. To begin, across all my previous builds… I really don't delete many things, and I have a LOT of games loaded, so I use up a huge amount of storage. Also, on my last build, I tried out RAID5. Now, while I was WILDLY over-optimistic about performance (RAID5 on platters still isn’t remotely comparable to a single SSD), I did get used to the idea of having redundancy. I was also very fortunate in this department, because out of all my previous builds, I never had a HDD fail, but the first time I tried RAID5, one of the disks failed – which was no problem! After having that happen, I really don’t think I want to go back to a single disk for all my files and stuff. The last note in the storage section is that I’ve done a bit of streaming, but even when I’m not, I run lowkey.gg or some other capture program so I can make clips. The problem is that this is very disk intensive, and it fights for disk IO w/anything else I’m running – namely the game that I’m running that I would want to capture things from in the first place!

Here's my current storage layout:
C: 91GB free of 464GB (500GB SSD)
D (Downloads): 159GB free of 200GB
F (Files): 146GB free of 200GB
G (Games): 473GB free of 4TB
P (Apps): 444GB free of 500GB
R (Archive): 134GB free of 400GB
X (Desktop): 192GB free of 200GB

My C drive is an SSD, and the reason I’ve used a lot of it is because, in addition to some apps that will only allow installs on the C: drive, I’ve also ended up putting my most-played games on it for performance. I actually have D, F, and X bound to their respective functions in the system. I like this, because when I want to go to my documents, I just type F: in the address bar and there I am! Same w/downloads/desktop. This also REALLY helps get me back up and running after a new build/restore, because all my files are exactly where I want them after a bit of configuration. D, F, G, P, R, and X are all part of the RAID5 array, just split into logical drives. Currently, I’m using R for the temp storage for lowkey/OBS… but it’s still part of the same RAID array, so it still fights for IO. The plan for the new build is m.2/NVMe for the C, a RAID5 of at least the size you see here for everything else… and then another m.2/NVMe dedicated to video encoding.

tl;dr – This is a gaming build w/a unique storage situation. On to the parts list, where I have several questions.



CPU – Ryzen 9 5900x – pretty firm on this

GPU - Probably a Radeon RX 6900XT. Possibly a RX 6800 XT, depending on the actual performance difference, but if I’m in that range, getting a RTX 3080 might be just as much of a possibility depending on deals/etc. I do know about the smart tech or whatever where the CPU and GPU from AMD are supposed to be able to talk to each other for better performance, but again, that all hinges on third party benchmarks (let’s be serious, I’m primarily talking about JayzTwoCents).

Mobo – I don’t know? Again, never having built an AMD machine, I’m WAY out of my research zone for this, but I’m pretty sure it will be an X570 board, and most likely the ASUS ROG Strix X570-E. Thoughts on this would be appreciated. One thing I need to make sure of is that the board will support my storage strategy, so I would need at least 2 NVMe slots, and plenty of SATA connections after that.

Cooling – I have never done a water build… don’t ever really want to. I’m pretty impressed w/the fact that Noctua FINALLY got away from their aged beige color scheme, so I will most likely go with an NH-D15.

Memory – Again, I haven’t done much research, but I doubt there’s that much difference, so I’ll probably look for a good BF deal, and I’ll be getting 32GB, unless you guys shout at me.

PSU – Anything 1000+ watts from Silverstone or Corsair – will look for some deal on BF

Case – Another area I haven’t done a lot of research. My primary concern here would be great airflow and proper room/accommodations for 2 NVMe/m.2s, and 3-4+ SSDs.

Storage – Here’s where we get into the fun, and I’m hoping you have good knowledge/advice on performance, problems, etc to help guide me away from noob mistakes and into a solid configuration.

C – NVMe or m.2, probably 1TB, but potentially 500GB. As you can see from the above, I currently have a 500GB C, and while a lot of space is used and I have to fight to free space, that’s mostly because of games, which should be solved if the RAID5 option is chosen wisely. The reason I’m flirting w/a 1TB option is because I have a friend who might be able to get me a good deal on a 1TB WD black NVMe. That being said, I’m not tied to anything at the moment, so I’m very open to your thoughts on this.

Video/Encoding – Again NVMe or m.2. Again, 500GB or 1TB depending on deals. I’m less worried about the performance of this one, because the major performance upgrade will just be getting the video encoding to work w/a separate drive. I could even go down to an SSD for this option. I thought about doing an elgato device or something similar, but I don’t want to use this just for streaming. Even if/when I do other video/music editing on this drive, final products won’t necessarily live here, so I don’t necessarily need to carve out a huge chunk of storage space for this attribute.

Everything else – Here’s where we get back to RAID5, and I’m shooting for at least the size I had before (as you can see, I’m already pushing the limits of my games drive, and that’s with some of the games being offloaded to the C). So that’s 3x3TB drives minimum, and SSDs at that. The thing is, in my first RAID5 foray, I completely overestimated the performance, and I don’t really want to do that again. I know there are a plethora of differences in SSDs when it comes to speed, especially when you tie that in to size. I know there are some who warn about using SSDs for RAID, because there is so much IO w/the parity and everything, but I think most of those fears are assuaged by pretty much anything currently available on the market. That being said, this could be another area where I’m underestimating the potential risks. Does anyone have knowledge/thoughts on using SSDs in RAID? Also, because of the need for multiple drives, and because of how much large-capacity SSDs cost in the first place, this is where I really have to take price into consideration, because things can scale up quite quickly. I had previously stated that I need 3x3TB drives… but the cheapest option right now is a 4TB drive, and there really aren’t any affordable options in the 3TB range. So really, I’m looking at 3x4TB drives.
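For context on where that parity IO comes from: in RAID5 the parity block is just the byte-wise XOR of the data blocks in a stripe, which is also how a lost drive gets rebuilt. A toy sketch with made-up 4-byte blocks, not tied to any real controller:

```python
# Minimal illustration of RAID5-style XOR parity: the parity block is the
# byte-wise XOR of the data blocks, so any single lost block can be rebuilt
# from the survivors. Blocks here are hypothetical 4-byte values for brevity.

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"\x0A\x0B\x0C\x0D", b"\x10\x20\x30\x40", b"\xFF\x00\xFF\x00"]
parity = xor_blocks(data)

# Simulate losing the second drive: XOR the survivors with the parity block.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]  # the lost block comes back intact
```

That per-write parity update is the extra IO people warn about; on modern SSDs the random-write cost of it is far smaller than on spinning disks.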

The cheapest option is the Samsung 870 QVO. Then you’ve got a couple other Samsung options, WD, SanDisk and some Kingston options. I have 0 knowledge of the reliability of Kingston at the SSD level. However, after that, you get right into m.2 options. I’m pretty sure m.2 would be better, but you’ve got to have a mobo that can handle that.. right? And while there are some mobos out there that handle 4-6 m.2s, part of the idea of RAID5 is scalability, so I don’t really want to be bound by a number that low (considering my C and video drives would also be counted in there), even though realistically I wouldn’t be doing more than 4 drives in my RAID, especially since they’re already bumped up to 4TB instead of 3.

However, the other option is to go smaller and get more than 3x2TB drives. So, if I’m shooting for 6TB minimum, and RAID5 usable space is n-1 drives, then I would need at least 4 SSDs to get to my current storage size. The 2TB drives are roughly half the price of the 4TB drives in their respective categories, and I could even go to 5x2TB drives, have more space than I currently have, and still spend less than what 3x4TB drives would cost. However, once again, we get back into the discussion of speed/performance of larger drives vs smaller drives. I’m way out of my league here, because the only thing I really know about SSDs… and m.2… and NVMe… is that there are SUCH huge differences in performance over a wide array of factors. Also, if I push to 5x2TB drives, then add in the C and video drives, and I do m.2/NVMe, then I bust out of the limit of most mobos. ’Course… I could drop the C and video to SSDs and then use all the m.2/NVMe slots for the RAID…
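Quick sanity check on the capacity math, since RAID5 usable space is (n-1) x drive size – a throwaway sketch (layouts only, no real prices):

```python
# RAID5 usable capacity is (n - 1) * drive_size. Compare the candidate
# layouts discussed above; drive counts/sizes are the ones from the post.

def raid5_usable_tb(num_drives, drive_tb):
    """Usable TB in a RAID5 array: one drive's worth goes to parity."""
    return (num_drives - 1) * drive_tb

layouts = [(3, 4), (4, 2), (5, 2)]   # (number of drives, TB each)
for n, tb in layouts:
    print(f"{n} x {tb}TB -> {raid5_usable_tb(n, tb)}TB usable")
# 3 x 4TB -> 8TB usable
# 4 x 2TB -> 6TB usable
# 5 x 2TB -> 8TB usable
```

So 5x2TB and 3x4TB land at the same usable capacity; the tradeoff is slot count vs per-drive price.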



Ok.. so I posted this on a different forum... someone mentioned that AMD boards don't have RAID, but I could just use a RAID controller. As I was looking those up, however, I also saw that apparently RAID5 is deprecated?? Even though I'm currently using it on this PC and in my NAS?

Now I'm thinking I might need to rethink my whole storage system *outside* of RAID5...
 
Last edited:
Now I'm thinking I might need to rethink my whole storage system *outside* of RAID5...
I feel like spending on SSDs for a large amount of backup does seem costly.

Why not SSDs in the system for what you need, with backup on regular hard drives in the NAS?

What you describe does not seem to need the always-on redundancy RAID can provide.
 
The thing is, I have valuable files in multiple places. The F drive has a large majority of my valuable files, but so does the R drive, and the P drive has a lot of my code. Also, while it's absolutely true that I can just re-download games, there are config files, screenshots, mods, etc that are not as easy to replace.
 
The thing is, I have valuable files in multiple places. The F drive has a large majority of my valuable files, but so does the R drive, and the P drive has a lot of my code. Also, while it's absolutely true that I can just re-download games, there are config files, screenshots, mods, etc that are not as easy to replace.
You don't have valuable files in multiple places. You have valuable files all in the same box. If it's important to you, it needs to be backed up in at least two of these locations:
  • Another computer or NAS in the house
  • Offsite on an external drive (I keep an encrypted backup drive in my locker at work)
  • Cloud service
 
For a gaming rig or a flat-out supreme workstation, storage speed is going to become a bottleneck if it lags behind a process.

AMD has powerful multicore processing for game loads or any process load. Fast memory controller, HIGH bandwidth PCI express lanes.

For gaming specifically you do want a good CPU and GPU combo. Motherboard is key to getting everything to work together.

I like RAID 0 performance.

I think this PCI-e 4.0 card would work out great for the operating system and games/programs: https://www.asus.com/us/Motherboard-Accessories/HYPER-M-2-X16-GEN-4-CARD/ – 2TB in size, 4x M.2 SSDs would work for most scenarios, I believe.

Then use the motherboard M.2 slots for a reliable, dedicated, fast 1-3TB SSD for project files that work directly in conjunction with the main processing RAID 0.

Use additional SATA SSD storage that is reliable and fast for warehousing valuable DATA that you will not use that much. Reliability in data preservation is key for these data partitions.

I'd do something like the above for any modern supreme-power gaming or workstation computer if needed.

As needed. Some people may need this type of machine; however, having this type of machine available as needed would be nice too.
 
I agree that you are better served, and will spend less money, by simply buying a few M.2 and SSD drives to meet your storage needs, and adding another drive internally and a third drive externally to serve as backups. You can automate the backups with various software options.
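A bare-bones example of what that automation amounts to – a one-way mirror that only copies new or changed files. Real tools (robocopy, rsync, the various backup suites) do this far better; the paths in the commented-out call are hypothetical:

```python
# Bare-bones one-way mirror: copy files that are missing or newer on the
# source side. This is only a sketch of the idea behind backup automation;
# real tools handle deletions, retries, and verification properly.
import shutil
from pathlib import Path

def mirror(src: Path, dst: Path) -> int:
    """Copy files from src to dst when missing or newer; return count copied."""
    copied = 0
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps
            copied += 1
    return copied

# mirror(Path("F:/"), Path("//nas/backup/files"))  # hypothetical paths
```

Schedule something like this (Task Scheduler, cron) and the second and later runs only touch what changed.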


As for SSDs, I don't know what prices you are eyeing, but here's a 1TB for a good price: https://www.amazon.com/Samsung-Inch...078DPCY3T?smid=ATVPDKIKX0DER&tag=hardfocom-20
 
As an Amazon Associate, HardForum may earn from qualifying purchases.
First, LukeTbk, Grebuloner, Krispy Kritter, and anyone else thinking I'm trying to build a backup solution - let me assure you, I am fully aware that RAID5 is *NOT* a backup solution, and I am not intending it to be. I have a NAS for backup, and I have a few other cloud solutions for very important files. I am not looking for backup solutions in this post. One of the many reasons I'm going for RAID5 here is redundancy, but that is for hardware failure only, so that I *can* back up things to somewhere else if a drive starts failing, and so that I can get back up and going quicker if there is a hardware failure w/out having to start from scratch. Other benefits I'm using RAID5 for besides hardware redundancy: performance gains (I know write will take a hit, but read for gaming is the gain I'm focusing on here), the ability to add more disks, the ability to specify/modify partition sizes.

For a gaming rig or a flat-out supreme workstation, storage speed is going to become a bottleneck if it lags behind a process.

AMD has powerful multicore processing for game loads or any process load. Fast memory controller, HIGH bandwidth PCI express lanes.

For gaming specifically you do want a good CPU and GPU combo. Motherboard is key to getting everything to work together.

I like RAID 0 performance.

I think this PCI-e 4.0 card would work out great for the operating system and games/programs: https://www.asus.com/us/Motherboard-Accessories/HYPER-M-2-X16-GEN-4-CARD/ – 2TB in size, 4x M.2 SSDs would work for most scenarios, I believe.

Your first few lines describe exactly what I posted that I was looking to solve in my original post, so we're on the same wavelength here!! After posting elsewhere, I became aware that AMD boards do not natively have RAID support, but that also Intel integrated RAID support is pretty crap, so I then was wondering what RAID controller to get, and I really like your suggestion here, thank you! This is a bit new to me though, so I have a few questions - it looks like it has 4 connections for the m.2s, so you plug them directly into this card? Also, in its description, it says, "*M.2 SSD support dependent on CPU and motherboard design. " Does this mean it's still limited by however many m.2s the motherboard allows? Or is this more because of the fact that it's using a PCIe 4.0 slot to run all of this? The other thing it says is that NVMe is supported on the X570 platform, so I'm assuming that I could then use 4 NVMe instead of 4 m.2?

If I did go w/this option, and I'm following you right, you're saying:
- C and video SSDs attached to motherboard
- RAID5, where I will implement my logical drives on the ASUS Hyper card

That does lock me in to only 4 drives total, which puts me back in the 3x4 TB range, or locks me to 6TB if I do 4x2 TB, which is the current size I have now... and am running out of space on. Something I will have to think about, or at least, now that I know these cards exist, maybe I can look for one that will allow for 6 connections. Either way, I think you're helping me make some progress, so thanks again!
 
Do RAID 10. RAID 5 has too much overhead and SHOULD NOT be used for large-capacity spinning rust drives; for SSDs, sure, RAID 5 is fine. But for spinning rust, use RAID 10 or RAID 6. RAID 5 is dead, and you will lose data on a rebuild over 12TB, 100%. The strain of a rebuild on RAID 5 (and even 6) is what kills that 2nd drive and leaves you with nothing – or the flipped bit that will happen at 12TB.
 
Last edited:
Do RAID 10. RAID 5 has too much overhead and SHOULD NOT be used for large-capacity spinning rust drives; for SSDs, sure, RAID 5 is fine. But for spinning rust, use RAID 10 or RAID 6. RAID 5 is dead, and you will lose data on a rebuild over 12TB, 100%. The strain of a rebuild on RAID 5 (and even 6) is what kills that 2nd drive and leaves you with nothing – or the flipped bit that will happen at 12TB.

I am specifically using ONLY SSDs in this build. The machine I'm currently on is a 3x3TB RAID5 array of platter HDDs (I think that's what you mean by rust?). In my OP I mention that I learned that lesson in my current build, so I'm avoiding it in the new one.
 
First, LukeTbk, Grebuloner, Krispy Kritter, and anyone else thinking I'm trying to build a backup solution - let me assure you, I am fully aware that RAID5 is *NOT* a backup solution, and I am not intending it to be. I have a NAS for backup, and I have a few other cloud solutions for very important files. I am not looking for backup solutions in this post. One of the many reasons I'm going for RAID5 here is redundancy, but that is for hardware failure only, so that I *can* back up things to somewhere else if a drive starts failing, and so that I can get back up and going quicker if there is a hardware failure w/out having to start from scratch. Other benefits I'm using RAID5 for besides hardware redundancy: performance gains (I know write will take a hit, but read for gaming is the gain I'm focusing on here), the ability to add more disks, the ability to specify/modify partition sizes.



Your first few lines describe exactly what I posted that I was looking to solve in my original post, so we're on the same wavelength here!! After posting elsewhere, I became aware that AMD boards do not natively have RAID support, but that also Intel integrated RAID support is pretty crap, so I then was wondering what RAID controller to get, and I really like your suggestion here, thank you! This is a bit new to me though, so I have a few questions - it looks like it has 4 connections for the m.2s, so you plug them directly into this card? Also, in its description, it says, "*M.2 SSD support dependent on CPU and motherboard design. " Does this mean it's still limited by however many m.2s the motherboard allows? Or is this more because of the fact that it's using a PCIe 4.0 slot to run all of this? The other thing it says is that NVMe is supported on the X570 platform, so I'm assuming that I could then use 4 NVMe instead of 4 m.2?

If I did go w/this option, and I'm following you right, you're saying:
- C and video SSDs attached to motherboard
- RAID5, where I will implement my logical drives on the ASUS Hyper card

That does lock me in to only 4 drives total, which puts me back in the 3x4 TB range, or locks me to 6TB if I do 4x2 TB, which is the current size I have now... and am running out of space on. Something I will have to think about, or at least, now that I know these cards exist, maybe I can look for one that will allow for 6 connections. Either way, I think you're helping me make some progress, so thanks again!

I'm not sure – I've never done it – but apparently it's possible with the card. I would use that card for RAID 0 on all 4 slots, then use the other storage connections on the motherboard, via their designed interfaces, for drives not part of the RAID 0.
 
Have you seen the newest CP2077 requirements? I would spend less on the processor, more on the graphics card.
[attached image: Cyberpunk 2077 system requirements]
 
You actually sound like you're building something similar to my setup.

I'm on a 3900X in an Asrock Taichi X570, but I have 6 mechanical hard drives (totalling 40TB) and 4 NVME SSDs (totalling about 3TB.)

I went with the Asrock Taichi X570 specifically because it had 3x NVME slots and 8 SATA ports; I'm planning on adding an additional mechanical hard drive by the end of the year (if I get income again). My 4th NVME is in a cheap PCI-E riser.
(512GB PCI-E 4.0 NVME for OS + Plex server data cache (mostly empty), 1TB NVME for game cache, 1TB NVME for Microsoft Game Pass*, 512GB NVME for hosting my Minecraft server as well as the Plex database cache and as a scratch drive for video transcoding.)

I would look into Cache technology, my gaming drive is actually an 8TB spinner but I use primocache to have it cache on a 1TB NVME SSD + 8GB of RAMcache and it loads things even faster than a pure NVME SSD does.
I have *multiple* games that are over 100GB installed, a few that are approaching 200GB. For me it was easier to just keep them all on a hard drive than deal with the time it takes to redownload them. Even with my 1Gb internet it's simply not convenient to redownload them all - furthermore, moving games around from disk to disk was frustrating me, which is why I went with a cache a few years back and never looked back.

If you're going purely SSD then getting a case that can hold that many drives is MUCH easier than if you're going with a bunch of spinners. The amount of heat that my 6 HDDs output is not insignificant, and neither is the amount of room they take up. You can usually fit a few 2.5-inch SSDs behind the motherboard, or on the floor of the case, or wherever the hell you want. They don't care about orientation.
I went purely air cooled because for my scenario I require my case to be as absolutely silent as possible and this makes that work, but a good water cooled setup would probably be superior.

*If you're going to use Xbox Game Pass and you have a lot of hard drives, for the love of sanity make sure that you put your Game Pass games on their own hard drive. Microsoft installs games using these hidden virtual drives and it's the biggest fucking clusterfuck in the world if anything about any of them looks out of place.

I don't have an onsite backup for all of my stuff; I clone most of my important data to two different cloud backups, and I do have an external HDD that I use as backup for a lot of my more important stuff. I'd say that 35+TB of my data isn't worth backing up (media and game files). It would be a pain in the ass to recover, but not more of one than the cost of acquiring the hard drives required to do so.

there are config files, screenshots, mods, etc that are not as easy to replace.
Yeah, mods and whatnot aren't easy to replace (trust me, I've tried), but you can absolutely back up configs and screenshots to a single location. At least with Steam, the majority of this is automatically saved to the cloud as it is. Most save files put themselves under USER on C these days anyway.
 
Last edited:
AMD doesn't support RAID 5 via its chipsets for SATA or NVMe drives. AMD also only supports 64k and 128k stripe sizes. It does NOTHING else. You do not have the same level of versatility or control you have with Intel chipsets which do support RAID 5 and various stripe sizes. Having said that, RAID 5 via a motherboard is a bad idea. The performance hit is too severe. RAID has very little benefit on the desktop anyway. RAID 0 configurations benchmark nicely, but it won't translate to real world applications. RAID 5 takes performance hits for parity calculations on anything without a dedicated hardware controller. If you want to do anything, grab yourself a couple of 1TB Inland drives from Microcenter and put them in a RAID 1 for redundancy.
 
Storage - I have always maintained a robust storage array. I am using 2x1TB NVME in my current system for games and current projects. I move finished projects over to my NAS to free up space as needed. My NAS now has a 512GB SSD for the system, 4x3TB in RAID0, and a 12TB as backup. I will upgrade my NAS to all SSD when they drop to $50USD a TB.
MB - I prefer Asus and Gigabyte. The ASUS ROG Crosshair VIII Hero or Gigabyte Aorus Master would be my choices with the CPU you want. Plenty of PCIe4 lanes, and I/O ports.
Memory - I would recommend Samsung B-die for high clocks and low latency on Ryzen systems. I am running 4x8GB @ 3800MHz CL16 1T Patriot Viper Steel. My first time using Patriot memory and I'm impressed.
PSU - Corsair or Seasonic. I tend to lean high-end when it comes to PSUs.
Case - Cooler Master and Thermaltake used to be my favs. Now I would prefer NZXT and Fractal Design, but I'm building a mATX sys in a CM MasterBox chassis.
Cooling - I switched to AIO for CPU a while back. I get lower CPU temps in high ambient heat compared to air. I would say overall AIOs and high-end air coolers have similar performance. The one advantage AIOs have over air is cool temps at lower decibels. I use 4x140mm Noctua NF-A14 with my AIO for extra airflow in my case.
 
Keljian - the name actually looks familiar, which is interesting since I haven't been on these boards in over a decade. I did say I'd be going w/the 3080 or 6800XT – mostly whichever has better benchmarks, but if they're similar, I'll look for a good deal. I'm not exactly going to plan my setup around a game w/so many release issues, judging by hardware that no one has benchmarks for, lol. Not going to go to 3090 levels, but I will be making sure my GPU is quite beefy.

TheSlySyl - Game cache????? I need to learn about this. It's called PrimoCache you say? I do know some of the configs and stuff save in the cloud, but I've had mixed results on relying on that. Also, I don't upload all my screenshots.

Dan_D - Yeah, in the replies here, I've learned that I need to go w/a RAID controller card... which is something else I'm now researching :/.

pAiNkIlLaHvX - What NAS are you using? I have an older QNAP. I don't really do much w/it when it comes to my desktop; occasional backups. I kind of treat that as its own ecosystem serving PLEX and such.
MB and memory - Very much appreciate the input. W/trying to figure out the storage thing, I forgot that I still needed input on those components.
PSU - Already there, lol.
Case - Same. I've had 1 Cooler Master and 3? TTs... but I think it's time for something different. I thought about making sure I had space for a rad from an AIO, but I really just don't see myself using an AIO...
 
TheSlySyl - Game cache????? I need to learn about this. It's called PrimoCache you say?
Yeah, wonderful little program.
https://www.romexsoftware.com/en-us/primo-cache/

Basically it allows you to set aside a RAM amount as an L1 cache and then an SSD as an optional L2 cache. The data that's loaded into the cache is chosen at the block level based on what's accessed, and then once that cache fills up it prioritizes what's most accessed.
I have 6GB of RAMcache on my NVMe SSD for my OS and it has a hit rate of about 60%, probably all system-level files, and for my game drive I have 8GB of RAMcache and 1TB of NVMe SSD cache. For games that I'm currently playing, the original load is potentially long – just as much as an HDD – but subsequent loads go at SSD speeds (or faster!), and stuff that has no reason to be on an SSD, such as video files or one-time-use audio files, doesn't ever get moved into the SSD cache. I can get cache hit rates as high as 80-90% once a game is in the cache. A majority of that will be RAMcache, and those load times are basically nonexistent.
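If it helps to picture that block-level behavior, here's a toy two-tier LRU model – not PrimoCache's actual algorithm, just the general "hot blocks migrate to the fast tier, hit rate climbs with reuse" idea:

```python
# Toy model of a tiered read cache: a small "L1" (RAM) in front of a larger
# "L2" (SSD), both plain LRU. Not PrimoCache's real policy -- only an
# illustration of why hit rate rises once a working set settles in.
from collections import OrderedDict

class LRUTier:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()
    def hit(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)  # refresh recency on a hit
            return True
        return False
    def insert(self, block):
        self.blocks[block] = True
        self.blocks.move_to_end(block)
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict least-recently-used

def read(block, l1, l2, stats):
    stats["reads"] += 1
    if l1.hit(block):
        stats["l1_hits"] += 1
    elif l2.hit(block):
        stats["l2_hits"] += 1
        l1.insert(block)          # promote a hot block to the faster tier
    else:
        l2.insert(block)          # full miss: filled from the slow disk

l1, l2 = LRUTier(4), LRUTier(16)
stats = {"reads": 0, "l1_hits": 0, "l2_hits": 0}
workload = [0, 1, 2, 3] * 25      # small hot set, re-read repeatedly
for b in workload:
    read(b, l1, l2, stats)
hit_rate = (stats["l1_hits"] + stats["l2_hits"]) / stats["reads"]
# only the first cold pass misses: 96 of 100 reads hit a cache tier
```

The first pass through the hot set misses (that's the "original loading is long" part); after that nearly everything is served from a cache tier.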

If you have any more questions, go for it. It's worked extremely well for my needs.
 
SSD RAID - I’ve rebuilt Compellent shelves with enterprise SSDs as a flash tier for DBs, then migrated on to running Redis on tiered NVMe shelves to SSD shelves depending on the key-value store. Use cases for 100k ops back in 2013 were limited compared to now. Be aware that when flash shelves die, it’s not like you get the usual drive or 2 going yellow to red; they all go around the same time.

I would look hard at your use case and decide whether the bandwidth available in a consumer chipset vs hedt is needed in your workflow.
It’s 1 thing to hang a bunch of drives onto a build, it’s another to actually use 2-4 of them in a workflow simultaneously.

I’m all for 1 nvme drive on chipset lanes, 1 nvme drive on mobo lanes, and 1tb+ ssds to fill in the rest.
I don’t see the use case for spinning rust, unless you have a fiber channel array somewhere.
I guess recycle your old ssds, dedicated scratch/export/materials seems really popular with the video and creative suites.

You can get a lot of those random access desktop workflows consolidated to specific drives, that’ll improve Win10 performance.
 
I have had data corruption with Primocache (probably due to manual shut downs) and decided not to use it in the end. YMMV. I did some playing around with it here: https://hardforum.com/threads/optane-memory-is-it-for-you-opinion-review.1961497/#post-1044146301

I do however have an optane memory m.2 which I'm using for a swap drive, and it works quite well in that role (static size)
I haven't had any data corruption since I turned off cached writing*, I do 100% read cache, which works fine for me.
I've lost my cache a few times due to windows updates or crashes, but retraining it is super quick.

I *do* use write cacheing on my plex drives, but literally nothing on those hard drives are volatile and it's mostly because I'm moving around files that are 2-30GB in size.

*Also, I turned off the cache on Xbox Game Pass titles – that was a whole clusterfuck I don't want to relive. Something to do with how they all junction themselves into separate hidden drives really, really fucked with the PrimoCache software.
 
Last edited:
Ok, so this thread was worth it just for learning about PrimoCache alone. A couple of questions, because I don't think I'm understanding things right - do you section off both RAM *and* disk space to make this work? Should I be getting more RAM then? Also, it sounds like maybe I should have:
- OS drive
- RAID array
- Encoding drive
- Cache drive? - Doesn't seem like the requirements on the cache drive would be much.

Keljian - I checked your link, but I don't see where you mentioned the data corruption? What kind of data corruption? Application/game files (i.e. things needed to be reinstalled to be fixed)? OS files? Data like... SQL?

SSD RAID - I’ve rebuilt Compellent shelves with enterprise SSDs as a flash tier for DBs, then migrated on to running Redis on tiered NVMe shelves to SSD shelves depending on the key-value store. Use cases for 100k ops back in 2013 were limited compared to now. Be aware that when flash shelves die, it’s not like you get the usual drive or 2 going yellow to red; they all go around the same time.

I would look hard at your use case and decide whether the bandwidth available in a consumer chipset vs hedt is needed in your workflow.
It’s 1 thing to hang a bunch of drives onto a build, it’s another to actually use 2-4 of them in a workflow simultaneously.

I’m all for 1 nvme drive on chipset lanes, 1 nvme drive on mobo lanes, and 1tb+ ssds to fill in the rest.
I don’t see the use case for spinning rust, unless you have a fiber channel array somewhere.
I guess recycle your old ssds, dedicated scratch/export/materials seems really popular with the video and creative suites.

You can get a lot of those random access desktop workflows consolidated to specific drives, that’ll improve Win10 performance.

Ok, I gotta be honest, I didn't follow everything you were saying here. First - if rust is referring to platters, there are no platter drives anywhere in this build. Not the RAID, not the other drives.

I see you mentioning the argument between consumer vs hedt several places, and honestly, I think I'm more concerned with consumer vs crap. If you look at my above layout: OS, RAID, encoding, and possibly cache - because all of them are discrete, and thus get performance gains just from that layout, I don't think I even need hedt anywhere. Don't need it in OS, don't need it in encoding, and w/trying to keep the RAID somewhat cost effective, that's where I'm basically just trying to stay away from "crap" territory. I *am* concerned w/drives that would give out too soon, so that's one place I need buying advice on what to stay away from.


Also, for everyone: other than the ASUS Hyper Gen 4 card recommended above, does anyone have any recommendations on controller cards? Now that I'm trying to learn more about them, I'm seeing SAS, but I'm also seeing that a lot of controller cards are used just for JBODs in Linux/server arrangements. I need to find something that handles RAID well, and I'd *like* something that allows for scalability. From what I gather, SAS shouldn't even enter into this conversation, but I'm still trying to figure out how these controller cards handle multiple drives. Do you just put on a SATA splitter and connect multiple ones? I realize this might be the dumbest question ever asked, but I'm going through vids and documents trying to learn about things as fast as I can, lol.
 
Sandisk and Samsung consumer ssds did fine in an enterprise use case. I don't think the average user would see that many writes in a year, much less 5 years.

Crap has a lot to do with bios & firmware implementation. You're going to see more issues at the outset with AMD with agesa features and vendor tweaks for your use case.
 
Ok so I haven’t spent enough time on this thread yet.
Point form for brevity,

Primocache :
  • Can use both memory and a disk as tiered storage for slower disks. This is useful if you have a particular setup (eg nvme used to cache sata, or optane to cache nvme)
  • Is best set up with 4KB blocks, since most SSDs have poor 4KB (small file) read/write performance, which is exactly where the cache helps
  • Uses a chunk of memory, if you set it up to do it. More memory means more cache, more cache means more chance something stored in cache will be used, 64 gb is relatively inexpensive these days, so it’s not a stretch to go there if you need large files cached.
  • Write cache speeds up writes, however there is a risk of data corruption, and I had it on my OS drive, which necessitated a wipe and reinstall, twice. So don’t put write cache on something you want to keep
  • Will limit overall memory performance as it is constantly managing cache, whether this matters is up to your use case
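PrimoCache's internals aren't public, but the read-tier behaviour described above (promote a block to fast storage on first access, serve repeats from there, evict the least recently used) can be sketched in a few lines of Python; the block IDs and cache size here are made up for illustration:

```python
from collections import OrderedDict

class ReadCache:
    """Toy LRU read cache: promote blocks from slow storage on first access."""
    def __init__(self, backing, capacity):
        self.backing = backing      # dict of block id -> data (the "slow disk")
        self.capacity = capacity    # max cached blocks (the "fast tier")
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)       # mark as recently used
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]              # slow path: hit the backing disk
        self.cache[block] = data                # promote to the fast tier
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict least recently used
        return data
```

It also shows the working-set problem raised later in the thread: if you keep streaming more unique blocks than the cache can hold, every access is a miss and the fast tier never pays off.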

Now on to drives/other stuff :
  • Sabrent Rocket series NVMe drives tend to be very fast and are highly recommended by forum goers. They are also PCIe 4 compatible. Samsung is still very good. You can sometimes get the Gigabyte Aorus 1TB NVMe drives cheap, and they're very fast.
  • Optane memory drives are limited by 2x pcie, but their 4kb performance exceeds that of all other non optane drives. This makes them a good candidate for L2 cache if you’re running sata drives or for a swap file.
  • Optane drives in general are very fast, but very expensive compared to other drives. If you are looking for the fastest on the planet, they are up there.
  • JBOD controllers from LSI are relatively inexpensive and just work for most applications, though many motherboards already come with plenty of SATA ports.
  • My preference for motherboards is the gigabyte aorus range, they always have good cpu vrms, and having used one (see my sig) the quality is outstanding.
  • Consider the fractal design r6 case, quiet and easy to work in
  • With those RAID5 setups etc., consider just getting a larger NVMe and partitioning it rather than having so many drives. Much easier to manage, and it will be faster if you don't need the redundancy; with the latest NVMe drives, you have plenty of endurance. You can even provision some of the space for redundancy if you want that.
 
Also, Sabrent do big NVMe SSDs, 8TB from memory.

3x4TB Sabrents will work on some motherboards and will be screaming fast... 2x8TB is a possibility also, depending on your budget.

Won’t set the world on fire for speed, but you would be hard pressed to get faster out of a raid 5 solution.

https://www.servethehome.com/sabrent-rocket-q-8tb-review-size-matters/

if you want faster you will probably have to settle for 2tb nvme, of which you can fit up to 3 on some motherboards, which will net you 6tb of very fast storage, 5gig-7gig a second kind of fast, per drive. (Which is kind of pointless for gaming purposes)

Of course, ram is 46ish gig a second, so cache will still help.

Risk of failure for NVME is very very low, modern day drives can do >600TB writes and still keep on kicking, making raid a bit redundant (See what I did there?) that is unless you want to run standard mechanical drives.
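As a back-of-envelope check on that endurance claim (the >600TBW figure is from the post above; the daily write volume is just an assumed example):

```python
def endurance_years(tbw_rating_tb, gb_written_per_day):
    """Years until a drive's rated write endurance (TBW) is used up
    at a constant daily write rate."""
    days = tbw_rating_tb * 1000 / gb_written_per_day
    return days / 365

# 600 TBW at a heavy 100 GB/day of game installs and capture footage:
print(round(endurance_years(600, 100), 1))  # prints 16.4 (years)
```

Even at writes well beyond a typical desktop, the rating outlasts the useful life of the build, which is the point being made about RAID for endurance reasons.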
 
Sandisk and Samsung consumer ssds did fine in an enterprise use case. I don't think the average user would see that many writes in a year, much less 5 years.

Crap has a lot to do with bios & firmware implementation. You're going to see more issues at the outset with AMD with agesa features and vendor tweaks for your use case.

Very good information here. When you mention the consumer line of Samsung... how low did you go? I understand there are varying levels of quality even in particular sectors of a particular vendor, and I was looking to potentially go as low as an 870 QVO. Have not heard of AMD AGESA; again, I'm totally new to the AMD camp. Will have to look that up.

As for my use case, I will be a slightly higher end consumer, but definitely not enterprise. My computer is on 24/7 (currently on 44 days of uptime, and that's low for me). I work 7:30-4:30 every day, and then do quite a bit of gaming, I have hundreds of browser windows open; one running a text/web game, and tons of other apps always running - discord, 3 diff browsers, steam, lowkey, MS Teams, outlook, Excel/NP++/PDFs/Word/etc, Corsair/Hyperx software, and a couple game launchers that occasionally rotate stuff. The majority of those things take little to no IO on a regular basis, but they do *regularly* ping the disk. So while I may not be doing huge read/writes all the time, I am slightly concerned w/the thousand needles of slow death, as well as parity calculation issues.

The PrimoCache discussion:
- I would most likely only be doing cache for reads, not writes... *especially* since that's where you're seeing data corruption.
- I walked away from the thread for a while, and thought about the fact that I play several different large games regularly... daily in most cases. I've tried a few things w/cache systems before, and an issue I was always running into was the fact that, because I'm regularly loading things that exceed the size of the cache, I'm not actually using the functionality of the cache as intended. I wonder if this would be an issue w/Primo as well? Now, these were with admittedly smaller caches, so perhaps dedicating an entire 1TB drive to cache might see an improvement..

Keljian:
- Thank you for your input on Sabrent. I have seen quite a few of those pop up while looking at SSDs, but I'd never heard of it, so instinctively stayed away. Now that I know they're considered "premium" of sorts, I'll look more into them.
- As for the Raid discussion, there are several points I'd like to clear up:
-- I *do* want RAID5 for the redundancy - just of the drives/partitions I mentioned. I'm not putting OS/encoding/(possibly cache) on the RAID5, because those don't need redundancy.
-- I also want RAID5 for scalability - so I can start w/the amount of space I have now at a *minimum* and scale up as needed
-- W/the above point in mind, this is why I'm looking at a RAID controller card and avoiding the idea of using the mobo as the RAID controller. I initially thought I'd use the mobo, but after these discussions, I'm seeing a discrete controller card as an essential part of this build.
-- Also, since I'm pretty set on RAID as a solution for the bulk of my stuff, I don't think 2x8TB is going to be feasible, from a design or wallet perspective lol.

I would like for the possibility of NVMe in my RAID... if I can find something affordable, but I'm trying to learn what is possible w/the RAID controller cards, and what the limitations are.
 
The Asus Hyper M.2 card may give you what you are looking for: https://www.asus.com/Motherboards-Components/Motherboards/Accessories/HYPER-M-2-X16-GEN-4-CARD/
You would want to check the motherboard has pcie bifurcation support though..

However, I am still of the opinion that with the reliability of modern ssds such as they are, the chances of you losing data on them (provided they are powered on occasionally) is so low that it is almost infinitesimally small. And regards to expansion, if you have one large drive, you can always add another bigger one in future.

In my opinion you are trying to over complicate the issue.
 
I also think you're WAY trying to overcomplicate the issue. Get one smaller super fast NVMe for the OS. Get a big NVMe for everything else. Maybe have a good ol' fashioned spinner as backup, or if you really want an SSD backup, grab a large 2.5-inch SSD and just have it set to mirror the data of the larger NVMe drive. Done.

I'm only running 4 different NVMe drives because I've been buying them separately as I go. My ideal situation would only have two SSDs, one 512GB and a second 2TB or bigger (and then multiple identical spinners for everything else).
 
I'm with keljian and syl, even ssds from the Intel 320 era were fine as long as OS TRIM implementation was decent for that era.

CrystalDiskMark will scan deep enough that replicating an SSD or NVMe drive is a non-issue, and literally hours faster than replicating an array. This has been a common upgrade task for many of us, 80GB-1TB SSDs to NVMe drives. Most of us outgrew the drive capacity before any degradation became an issue. I have a Samsung 500GB 830 kicking around somewhere that's fine, but NVMe-NVMe transfer was too good to bother with an SSD slowing down my workflow.

2tb 970 evo is $250, I'd let windows manage my single main drive.
I guess the 500gb ssds can be used as cold archive and backup.
I guess the seldom used partition data can be homed on a 1tb 970 evo, another 2tb would just be long term insurance.
Whatever smaller drives can be used as scratch and export for editing.

This is uncomplicated and cleans up that weird partition table.
 
Never ever use SSDs for (unplugged) cold storage unless they get powered up at least once every 3-4 months.

While it is possible that it won’t be an issue, the charge held in the cells can leak and you can get corruption.
 
That "weird partition table" isn't a bug, it's a feature.

And I'm not sure I can achieve my space needs w/single disks. 4TB just for games is already too small, and that's before formatting. There are plenty of games I haven't installed that I want to, because I'm already trying to trim other games and don't have space, and we all know they're not going to be getting any smaller.
 
That "weird partition table" isn't a bug, it's a feature.

And I'm not sure I can achieve my space needs w/single disks. 4TB just for games is already too small, and that's before formatting. There are plenty of games I haven't installed that I want to, because I'm already trying to trim other games and don't have space, and we all know they're not going to be getting any smaller.
Get a large spinner (HDD) then, use primo with a nice big chunk of memory and a super fast ssd that is 500gb or so. It will be your cheapest big storage option that is quick. Sure the first time you load stuff it will not be super fast but every time thereafter you will be ok. Heck I would even enable write cache as games are not high priority and you can always re-download if you need to. You don’t need raid for games (can always download them again).

You can even get a spinner with a fair amount of on board cache these days if you try.

With steam and most other gaming platforms you can move game install folders around, have a medium SSD (1tb should be sufficient) to move the games you’re currently playing the most to SSD, then move them back to spinner after you don’t play them that often.
 
Exactly, get a large hard drive and primo cache it. As long as you aren't using Xbox Gamepass games it should work beautifully with everything else.

I prefer PrimoCache over moving games around because I've found moving games around to be far, far more time consuming, especially with the games I play, where I really don't need to move 50GB of video files from FFXV for loading.

If you want to "force" a game onto the cache, in Steam, right click a game and tell it to verify the game data. Once it's done verifying, because it scanned all of the game files for errors, it'll immediately move it to the cache drive.
 
What is primo cache and what exactly does it do please?
Primocache is tiered storage for windows.

Essentially what it does is copy stuff from slow storage to fast storage when you access the stuff, that way when you access it again it comes from the fast storage.

Obviously the more fast storage you give it, the more you benefit.
 
Primocache is tiered storage for windows.

Essentially what it does is copy stuff from slow storage to fast storage when you access the stuff, that way when you access it again it comes from the fast storage.

Obviously the more fast storage you give it, the more you benefit.

Thank you, this will work well with my unused 500 or 512GB SSD that I now have since upgrading to an NVMe drive for primary.
 
I really haven’t done a ton of research for this build, and I’m doing something different this time. I’ve used Intel/Nvidia for every build, but not because I’m a fanboy. I usually buy high middle tier or low high tier, and AMD has just never had any offerings in that range when I’m looking to build. However, as I’m sure you’re all aware, this is quite a time to be alive, because AMD is kicking the pants off of Intel w/their latest offerings. I’m definitely going for a 5900x for the CPU, and I’ll most likely go for an AMD GPU as well, but I’m going to wait for the benchmarks first. If they aren't as promised, I'll go with a 3080.

The other interesting thing about this build, is the storage arrangement. I do a couple interesting things in that department. To begin, from all my builds previous… I really don't delete many things, and I have a LOT of games loaded, so I use up a huge amount of storage. Also, on my last build, I tried out RAID5. Now, while I was WILDLY optimistic w/performance (RAID5 on platters still isn’t remotely comparable to a single SSD), I did get used to the idea of having redundancy. I also was very fortuitous in this department, because out of all my previous builds, I never had a HDD fail, but the first time I tried RAID5, one of the disks failed – which was no problem! After having that happen, I really don’t think I want to go back to a single disk for all my files and stuff. The last note in the storage section, is that I’ve done a bit of streaming, but even when I’m not, I have lowkey.gg or some other capture program so I can make clips. The problem is that this is very disk intensive, and it fights for disk IO w/anything else I’m running – namely the game that I’m running that I would want to capture things from in the first place!

Here's my current storage layout:
C: 91GB free of 464GB (500GB SSD)
D (Downloads): 159GB free of 200GB
F (Files): 146GB free of 200GB
G (Games): 473GB free of 4TB
P (Apps): 444GB free of 500GB
R (Archive): 134 GB free of 400 GB
X (Desktop): 192 GB free of 200 GB

My C drive is an SSD, and the reason I’ve used a lot of it is because, in addition to some apps that will only allow installs on the C: drive, I’ve also ended up putting my most played games on it for performance. I actually have D, F, and X bound to their respective functions in the system. I like this, because when I want to go to my documents, I just type F: in the address bar and there I am! Same w/downloads/desktop. This also REALLY helps get me back up and running after a new build/restore, because all my files are exactly where I want them after a bit of configuration. D, F, G, P, R, X are all part of the RAID5 array, just split into logical drives. Currently, I’m using R for the temp storage for lowkey/obs… but it’s still part of the same RAID array, so it still fights for IO. The plan for the new build is m.2/NVMe for the C, a RAID5 of at least the size you see here for everything else…. And then m.2/NVMe to dedicate to video encoding processes.

tl;dr – This is a gaming build w/a unique storage situation. On to the parts list, where I have several questions.



CPU – Ryzen 9 5900x – pretty firm on this

GPU - Probably a Radeon RX 6900XT. Possibly a RX 6800 XT, depending on the actual performance difference, but if I’m in that range, getting a RTX 3080 might be just as much of a possibility depending on deals/etc. I do know about the smart tech or whatever where the CPU and GPU from AMD are supposed to be able to talk to each other for better performance, but again, that all hinges on third party benchmarks (let’s be serious, I’m primarily talking about JayzTwoCents).

Mobo – I don’t know? Again, never having built an AMD machine, I’m WAY out of my research zone for this, but I’m pretty sure it will be an X570 board, and most likely the ASUS ROG Strix X570-E. Thoughts on this would be appreciated. One of the things I need to make sure is that the board will support my storage strategy, so I would need at least 2 NVMe, and plenty of SATA connections after that.

Cooling – I have never done a water build… don’t ever really want to. I’m pretty impressed w/the fact that Noctua FINALLY got away from their aged beige color scheme, so I will most likely go with a NH-D15

Memory – Again, I haven’t done much research, but I doubt there’s that much difference, so I’ll probably look for a good BF deal, and I’ll be getting 32GB, unless you guys shout at me.

PSU – Anything 1000+ watts from Silverstone or Corsair – will look for some deal on BF

Case – Another area I haven’t done a lot of research. My primary concern here would be great airflow and proper room/accommodations for 2 NVMe/m.2s, and 3-4+ SSDs.

Storage – Here’s where we get into the fun, and I’m hoping you have good knowledge/advice on performance, problems, etc to help guide me away from noob mistakes and into a solid configuration.

C – NVMe or m.2, probably 1TB, but potentially 500GB. As you can see from the above, I currently have a 500GB C, and while a lot of space is used and I have to fight to free space, that’s mostly because of games, which should be solved if the RAID5 option is chosen wisely. The reason I’m flirting w/a 1TB option is because I have a friend who might be able to get me a good deal on a 1TB WD black NVMe. That being said, I’m not tied to anything at the moment, so I’m very open to your thoughts on this.

Video/Encoding – Again NVMe or m.2. Again, 500GB or 1TB depending on deals. I’m less worried about the performance of this one, because the major performance upgrade will just be getting the video encoding to work w/a separate drive. I could even go down to an SSD for this option. I thought about doing an elgato device or something similar, but I don’t want to use this just for streaming. Even if/when I do other video/music editing on this drive, final products won’t necessarily live here, so I don’t necessarily need to carve out a huge chunk of storage space for this attribute.

Everything else – Here’s where we get back to RAID5, and I’m shooting for at least the size I had before (as you can see, I’m already pushing the limits of my games drive, and that’s with some of the games being offloaded to the C). So that’s 3x3TB drives minimum, and at least SSDs. The thing is, in my first RAID5 foray, I completely overestimated the performance, and I don’t really want to do that again. I know there are a plethora of differences in SSDs when it comes to speed, especially when you tie that in to size. I know there are some who warn about using SSDs for RAID, because there is so much IO w/the parity and everything, but I think most of those fears are assuaged by pretty much anything currently available on the market. That being said, this could be another area where I’m underestimating the potential risks. Does anyone have knowledge/thoughts on using SSDs in RAID? Also, because of the need for multiple drives, and because of how much large capacity SSDs cost in the first place, this is where I really have to take price into consideration, because things can scale up quite quickly. I had previously stated that I need 3x3TB drives… but the cheapest option right now is a 4TB drive, and actually there really aren’t any options at the 3TB range that are affordable. So really, I’m looking at 3x4TB drives.

The cheapest option is the Samsung 870 QVO. Then you’ve got a couple other Samsung options, WD, SanDisk and some Kingston options. I have 0 knowledge of the reliability of Kingston at the SSD level. However, after that, you get right into m.2 options. I’m pretty sure m.2 would be better, but you’ve got to have a mobo that can handle that.. right? And while there are some mobos out there that handle 4-6 m.2s, part of the idea of RAID5 is scalability, so I don’t really want to be bound by a number that low (considering my C and video drives would also be counted in there), even though realistically I wouldn’t be doing more than 4 drives in my RAID, especially since they’re already bumped up to 4TB instead of 3.

However, the other option is to go lower, and get more than 3x2TB drives. So, if I’m shooting for 6TB minimum, and RAID5 is n-1, then I would need at least 4 SSDs to get to my current storage size. The 2TB drives are roughly half of the price of the 4TB drives in their respective categories, but I could even go to 5x2TB drives, have more space than I currently have, and still spend under what it would cost for 3x4TB drives. However, once again, we get back into the discussion of speed/performance of larger drives vs smaller drives. I’m way out of my league here, because the only thing I really know about SSDs…. And m.2…. and NVMe… is that there are SUCH huge differences in performance over a wide array of factors. Also, if I push to 5x2TB drive, then add in the C and video drives, and I do m.2/NVMe, then I bust out of the limit of most mobos. Course.. I could drop the C and video to SSDs and then use all m.2/NVMe slots for the RAID.....
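The n-1 arithmetic above is easy to tabulate; here's a quick sketch using the drive counts and sizes from the paragraph (no prices, since those move around):

```python
def raid5_usable_tb(n_drives, drive_tb):
    """RAID5 spends one drive's worth of capacity on parity,
    so usable space is (n - 1) * drive size."""
    if n_drives < 3:
        raise ValueError("RAID5 needs at least 3 drives")
    return (n_drives - 1) * drive_tb

for n, size in [(3, 4), (4, 2), (5, 2)]:
    print(f"{n}x{size}TB -> {raid5_usable_tb(n, size)}TB usable")
# 3x4TB -> 8TB usable
# 4x2TB -> 6TB usable
# 5x2TB -> 8TB usable
```

So 5x2TB matches 3x4TB on usable space, and both 4x2TB and the larger layouts clear the 6TB minimum mentioned.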



Ok.. so I posted this on a different forum... someone mentioned that AMD boards don't have RAID, but I could just use a RAID controller. As I was looking those up, however, I also saw that apparently RAID5 is deprecated?? Even though I'm currently using it on this PC and in my NAS?

Now I'm thinking I might need to rethink my whole storage system *outside* of RAID5...
Hi

Interesting idea; it was your title that grabbed me. I don't play games very much but I do content creation. I do think your requirements are not too dissimilar; I try to follow how the data flows through the system for the tasks/processes I do.

First off - you have too much confusion in your setup IMHO, so many small drives, it is very messy, I know you know about backup and RAID and all that jazz!

If you manage backup and redundancy outside of this, then given you do not really use that much storage, I would just have NVMe disks and be done with it! Get an X570 board with as many slots as possible; you will not be able to use an AIC like the Asus and Gigabyte ones due to the limitation in PCIe lanes (I am there and going Threadripper now).

So if you identify how you see your data flowing, I see it a bit like this based on what you wrote above:

You are unlikely to need more than 3 different locations, if you backup elsewhere!

1 disk for OS/Application - 1 TB Samsung or Western Digital gen 4 PCIe NVME
1 or RAID-0 n-disks - Files/downloads, archive, desktop
1 or RAID-0 n-disks - games/temp

or

do 1 OS disk - 5,000-7,000MB/s
and 4 disks in RAID-0 for everything else! Let it run at 13,000-15,000MB/s
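The striping estimate in that second option is roughly linear scaling; a sketch with an assumed per-drive speed and an assumed overhead factor (both numbers are guesses, not measurements):

```python
def raid0_seq_gbps(per_drive_gbps, n_drives, efficiency=0.9):
    """Sequential RAID0 throughput scales ~linearly with drive count,
    minus some controller/CPU overhead (the efficiency factor is a guess)."""
    return per_drive_gbps * n_drives * efficiency

# Four ~3.5 GB/s gen4 drives striped:
print(round(raid0_seq_gbps(3.5, 4), 1))  # prints 12.6 (GB/s)
```

Which lands in the quoted 13-15GB/s ballpark if the drives are a bit faster or the overhead smaller.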

Forget SATA and SSD, that was yesteryear! If you've got money to burn, then get one of those Intel Optane drives.

anyway, it should be possible to get a board where you can have 5+ NVME disks, then select a board that supports your networking

running through your system specs

CPU: AMD 3900x or better
Cooling: Noctua D-15 black!
motherboard X570 - 5+ NVME with or without Storage AIC
RAM: 32GB+ at 3600 or faster, with the tightest timings you can afford

Storage
Choose between 1 or 2 TB Nvme PCIe Gen4x4
Sabrent
WD SN850
Samsung 980Pro

GPU - whatever you can afford! AMD 6x00 XT or Nvidia 30x0 or RTX-20x0....

Power supply: 850W if you are using a mid-range GPU; go higher if you are using top tier!

Case: something nice and big, though if you are not using water cooling or many add-on cards, then you can get away with a smaller one (I just like them big)

good luck
 
I'm really trying to talk myself into that 8TB Sabrent drive. I know that's insane, but I really can't go below 4TB at this point. I did some more research on RAID5, and apparently NVMe just doesn't work with it at all.. so either go to a slower speed SSD (I know, that sounds funny) in order to keep my goal of being expandable and hardware redundant, or go to 8TB... I don't see another way.

With that in mind, I'd probably do a SATA 1TB for OS, same for video, 1 TB NVMe for primo cache, and the Sabrent 8TB NVMe.

Tived - you're saying the high end ASUS/Gigabyte boards don't have multiple PCI-e? It looks like they do. Also, the WD SN drives were on my radar for the smaller drives, and yeah, people keep touting Sabrent, so I think that's where I'll go for the bigger drive.

I need to find the right board for this, and I'm having a rough time figuring out why some are hundreds of dollars more or less. I was hoping to figure this out by black friday (today), but honestly, there don't seem to be very many good deals on the things I'm looking for.
 
Just mirror for redundancy, you don't need to go RAID5.
I have my important stuff set to mirror updates 1/day (both cloud and local.)

The odds of two NVME's simultaneously fucking up so catastrophically that you can't fix it is incredibly low. It's the type of hardware error so massive it would likely destroy an entire raid array anyway.

You also don't need to have a PrimoCache drive if you're not going with a spinning hard drive for the majority of storage for it to read off of. While there is an improvement in speed from going SSD to NVMe, it's absolutely NOT worth the cost of getting both. Just get a bigger NVMe in the first place.

If you do end up getting that 8TB NVME though, I'm just gonna be super jealous.
 
Ok so rather than a whole lot of 1tb drives

1x2 or 4 tb, multiple partitions for OS & video
1x8tb for whatever else.

All NVME, add primocache if you want but no write cache on the OS drive

Done

The 8tb is warranted for 1800TBW (1.8 petabytes) so will likely be able to double that. I assure you that you won’t hit that any time soon


if you need it to be fast the 980 pro is screaming fast...
 
I'm really trying to talk myself into that 8TB Sabrent drive. I know that's insane, but I really can't go below 4TB at this point. I did some more research on RAID5, and apparently NVMe just doesn't work with it at all.. so either go to a slower speed SSD (I know, that sounds funny) in order to keep my goal of being expandable and hardware redundant, or go to 8TB... I don't see another way.

With that in mind, I'd probably do a SATA 1TB for OS, same for video, 1 TB NVMe for primo cache, and the Sabrent 8TB NVMe.

Tived - you're saying the high end ASUS/Gigabyte boards don't have multiple PCI-e? It looks like they do. Also, the WD SN drives were on my radar for the smaller drives, and yeah, people keep touting Sabrent, so I think that's where I'll go for the bigger drive.

I need to find the right board for this, and I'm having a rough time figuring out why some are hundreds of dollars more or less. I was hoping to figure this out by black friday (today), but honestly, there don't seem to be very many good deals on the things I'm looking for.
Hi,

I think you misunderstood me regarding the PCIe lane issue; it's a CPU/chipset issue.

Ryzen and X570 (all chipsets that support Ryzen) have a limited number of PCIe lanes; I think it is only 24.

therefore you have to think carefully about what add on cards you get.

Don't mix SSD/SATA with NVMe; just get the latter and be done with it.

that is just my opinion

Sorry you didn't find anything at the Black Friday sale.
 