Replacing an old RAID Hyper-V setup, and have a couple of questions...

ZenDragon

[H]ard|Gawd
Joined
Oct 22, 2000
Messages
1,698
So I've been running an old LSI/3ware 9650SE-8LPML RAID card with a few WD Red 5400RPM 4TB drives in RAID 5 for a few years. I've transferred this RAID setup between several server builds and have not had any issues, with the exception of a single failed drive when I first set it up. It is, however, getting a little long in the tooth, so to speak, and I am looking to upgrade, which brings me to a few questions. I've been kind of out of the hardware game for years now, other than building basic gaming setups and such, so please pardon my ignorance.

I will just start by saying I do not have particularly high storage requirements. It has basically served as a storage array for a few Hyper-V VMs and some work stuff, mainly development projects running various servers. In hindsight I am not sure why I went with RAID 5 for the setup, and I am certainly not stuck on that choice; it seems there are better options these days.

Anyhow, to the point. My first question is simply about hardware vs. software RAID. I've been reading a few posts online that seem to suggest that a flash-backed hardware controller should be used for VMs. However, others are suggesting that unless you go with something high end, software is still often better. One of the better posts I found on the topic is here: https://serverfault.com/a/685328, but the details are a little overwhelming. I can't afford an SSD array at the capacities I actually need, so spinning drives are still necessary, which has me leaning towards a hardware controller with a write-back (WB) cache. Please correct me if I am wrong.
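As a sanity check, a crude Python sketch like the one below shows the gap between fsync'd writes (the small synchronous I/O pattern VMs tend to generate, and exactly what a battery/flash-backed write-back cache absorbs) and OS-cached writes. It's only an illustration, not a proper benchmark, and the file name and write count are arbitrary:

```python
import os
import time

def timed_writes(path: str, count: int = 200, sync: bool = True) -> float:
    """Write `count` 4 KiB blocks, optionally fsync'ing each one."""
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(count):
            f.write(os.urandom(4096))   # 4 KiB, a typical small VM I/O
            if sync:
                f.flush()
                os.fsync(f.fileno())    # force the write down to the disk
    return time.perf_counter() - start

# Run it on the array in question; the gap between the two numbers is
# roughly what a write-back cache can hide.
print(f"fsync each write: {timed_writes('wb_test.bin', sync=True):.2f}s")
print(f"cached writes:    {timed_writes('wb_test.bin', sync=False):.2f}s")
os.remove('wb_test.bin')
```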

My second question is about the best RAID configuration. Everyone seems to be suggesting that RAID 5 is bad, although I have only had one drive failure over the life of this system and had no issues doing a rebuild. I will, however, heed the warnings of the people better educated on the subject than myself. Some people are suggesting RAID 10; some say RAID 6 is OK on a budget, although with a write performance penalty, which is probably not good for VMs. The primary limiting factor here is cost per GB, so it's either more smaller drives (4-6 perhaps) or fewer larger drives (probably 3), which obviously dictates which configurations are possible. I am looking for roughly 8-10TB in total, which I feel is going to be a hard target to hit on a budget. At this point my first priority is hitting my space requirement; VM performance is important but secondary. I mainly just use these VMs in support of my development; they are generally not running 24/7 and go up and down frequently. So where is the middle ground here?
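For reference, here's how the usable-capacity math shakes out for the levels under discussion, as a quick Python sketch. The drive counts and sizes below are just example candidates for the 8-10TB target, not recommendations:

```python
def usable_tb(level: str, drives: int, size_tb: float) -> float:
    """Usable capacity after redundancy overhead."""
    if level == "RAID5":
        return (drives - 1) * size_tb    # one drive's worth of parity
    if level == "RAID6":
        return (drives - 2) * size_tb    # two drives' worth of parity
    if level == "RAID10":
        return (drives // 2) * size_tb   # mirrored pairs (even drive count)
    raise ValueError(f"unknown level: {level}")

# Candidate configurations for an 8-10 TB target.
for level, n, size in [("RAID5", 3, 6), ("RAID6", 4, 4),
                       ("RAID6", 5, 4), ("RAID10", 4, 4),
                       ("RAID10", 6, 4)]:
    print(f"{level}, {n}x {size} TB -> {usable_tb(level, n, size):.0f} TB usable")
```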

P.S. Sorry for the long post.
 

The general recommendation is RAID 6 if you're going to be doing RAID at all, as RAID 5 has a higher risk of a second disk failing while you're rebuilding, since all disks are at max load during the rebuild (which is why it's strongly recommended to stay away from it). RAID 10 is an option but needs 2x the disks for the same storage.
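The usual back-of-envelope for why RAID 5 rebuilds are risky: consumer drives like the WD Reds are commonly specced at less than one unrecoverable read error (URE) per 10^14 bits, and a rebuild has to read every surviving disk end to end. A rough Python sketch under that datasheet assumption (real-world error rates vary, so treat the number as illustrative):

```python
URE_RATE = 1e-14        # unrecoverable read errors per bit (datasheet spec)
DRIVE_BITS = 4e12 * 8   # bits per 4 TB drive (decimal terabytes)

def rebuild_ure_probability(surviving_drives: int) -> float:
    """Chance of at least one URE while reading every surviving drive."""
    bits_read = surviving_drives * DRIVE_BITS
    return 1 - (1 - URE_RATE) ** bits_read

# 4x 4 TB RAID 5 with one dead drive: the rebuild reads the other three.
print(f"RAID 5 rebuild, 3x 4 TB read: {rebuild_ure_probability(3):.0%}")
# With RAID 6, a single URE during a one-disk rebuild is still covered by
# the second parity, which is the point of the recommendation.
```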
 