Help settle an argument - RAID over 8 spindles

Alright - a discussion has come up at work over "what is better".

Since none of you know what we are doing, other than what I'm about to tell you... the question is: how would you do it?

HPE DL380.
8 600GB 15k SAS drives.
Hyper-V server running 3 VMs (DC, file server, and one other app server with nearly zero I/O).
Server 16.

The only other info I'd like to give you, beyond Hyper-V, is that disk performance needs are fairly minimal outside of boot times. That's generally once a month and outside of business hours.

The very last consideration I'd like to give you is that these are all remote office servers. Generally, we catch drive failures within a few hours but in some cases it's been a few days or weeks. HPE is 4 hours on site after we call.

Thank you :)
 
Well, better is a relative term. Since these are remote locations, to maintain the best uptime I would suggest a double-parity solution of some kind (hardware RAID 6, since you seem to be Windows-bound). That way any two of the disks could fail and you still maintain uptime without data loss (insert the "RAID is not a backup" mantra here). What RAID card do you have in the ProLiant? How are the drives currently configured? If it's taking you up to a few weeks to catch drive failures, I'd suggest a better procedure for monitoring your remote servers.
 
Well, better is a relative term.

I completely agree, and it's more complex than I'm letting on, for fear of swaying one answer over another. I'm intentionally being vague to see what results I can get from others.

Controller is an HPE Smart Array P408i-a SR Gen10.

The hours/days/weeks issue is currently being addressed with HP OneView monitoring, so that's in the works. The likely endgame is that we will be on top of things, and the most realistic length of a degraded state would be in the realm of 72-ish hours. Think a late-Friday-afternoon failure and a call-in/replacement on Monday morning.

Backups are performed nightly.

We are currently debating two likely solutions: RAID 1 (2-drive OS) plus RAID 5 or 6 with a warm spare for the VMs, or a full RAID 10 across all 8 disks (120 GB OS partition and the rest for the VMs).

Storage space is of no concern - no matter the drive configuration, we will likely never run into a capacity problem.

I'm completely of the opinion that both are correct answers, but I'm still at the "what would someone else do" question.
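
Just to put rough numbers on it, here's a quick Python sketch of the usable space each option gives (nominal 600 GB per drive, ignoring formatting and controller overhead; treat it as napkin math, not a sizing tool):

# Rough usable-space math for the two layouts being debated.
# Nominal 600 GB per drive; ignores formatted capacity and controller overhead.
DRIVE_GB = 600

def raid1(n):    # n-drive mirror set -> capacity of a single drive
    return DRIVE_GB

def raid5(n):    # n drives, one drive's worth of parity
    return (n - 1) * DRIVE_GB

def raid6(n):    # n drives, two drives' worth of parity
    return (n - 2) * DRIVE_GB

def raid10(n):   # n drives in mirrored pairs
    return (n // 2) * DRIVE_GB

# Option A: 2-drive RAID 1 (OS) + 5-drive RAID 5 or 6 (VMs) + 1 warm spare
print("A, RAID 5 variant:", raid1(2) + raid5(5), "GB usable")   # 3000 GB
print("A, RAID 6 variant:", raid1(2) + raid6(5), "GB usable")   # 2400 GB
# Option B: all 8 drives in RAID 10, OS carved out as a 120 GB partition
print("B, 8-drive RAID 10:", raid10(8), "GB usable")            # 2400 GB

So the RAID 6 flavor of option A and the full RAID 10 both land around 2.4 TB usable, which is part of why space isn't the deciding factor for us.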
 
Since speed, I/O, and capacity are not all that important:

If you want to keep your OS and data drives separate and are not concerned about the space:
2 drives Raid 1
5 drives Raid 6
1 Drive hot spare

Otherwise if you want to just have 1 big volume and partition it, go with
7 drives Raid 6
1 Drive hot spare

IF capacity was an issue, then all 8 drives in Raid 6 would be a good option.
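
If it helps to see the space math on those three layouts, a rough Python sketch (again nominal 600 GB per drive, napkin math only):

# Usable space for the three layouts above, nominal 600 GB drives.
GB = 600
layouts = {
    "2x RAID 1 (OS) + 5x RAID 6 + 1 hot spare": 1 * GB + (5 - 2) * GB,
    "7x RAID 6 + 1 hot spare":                  (7 - 2) * GB,
    "8x RAID 6, no spare":                      (8 - 2) * GB,
}
for name, usable in layouts.items():
    print(name, "->", usable, "GB usable")
# -> 2400, 3000, and 3600 GB respectively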
 
You want some redundancy and backup, so yeah, probably the higher RAID levels... but with large single drives now being affordable, I think RAID is on its way out. Why RAID a bunch of smaller drives when I could simply go pick up two 4TB drives and never fill them up? That's my thinking... I can't fill up what I have now, man... barely scratching 2TB.
 
I've been running 3 VMs on Hyper-V for 7 years now, 24/7/365, on 16 drives, 2 Xeon 5650s, 32 GB, 2008 R2. Not a single hour lost to bad drives, though 3 of them have gone bad over the years.
Each VM has its own 4 drives in Raid-6, the host has 3 drives in Raid-5, plus 1 hot spare. Dell T710.

Raid-6, I would never go Raid-5 or less on critical machines.

my 2 cents
 
I'd have ESXi booting from an SD card and RAID 50 for the 3 VMs; if there were an instance of SQL, I'd use RAID 10 instead. HP used to support 2012 R2 installs on SD, but I've heard mixed results with 2016.
 
Mirror your boot drives and do a large RAID 5 volume with the rest of the drives. RAID 6 does double writes, has much higher processing overhead, and will basically halve the performance of your controller.

Everyone says performance isn't a concern.. till it is. Don't forget about a backup solution on a separate piece of hardware.
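
To put rough numbers behind the overhead argument: the textbook write penalties are 2x for RAID 10, 4x for RAID 5, and 6x for RAID 6. Assuming something like 175 random IOPS per 15k spindle (an assumption, not a measurement from this controller), a quick sketch:

# Back-of-the-envelope random-write IOPS using textbook RAID write penalties.
PER_DRIVE_IOPS = 175          # assumed for a 15k SAS spindle
DRIVES = 8
penalties = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

raw = PER_DRIVE_IOPS * DRIVES
for level, penalty in penalties.items():
    print(level, "~", raw // penalty, "random write IOPS")
# RAID 10 ~700, RAID 5 ~350, RAID 6 ~233; reads are largely unaffected.

Whether that gap matters for three low-I/O VMs is the real question.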
 
Nah, it sounds like he wants the safety and backup security... should go all HDD then and sacrifice the speed for security. There's stuff now that will probably run for years. There are HDDs from 10+ years ago that still work; some might have dead sectors and whatever else, but they still work.
 
Otherwise if you want to just have 1 big volume and partition it, go with
7 drives Raid 6
1 Drive hot spare
I would like to do something similar to the OP. I was thinking of an LSI MEGARAID SAS 9271-8I + 8x Seagate ST4000DM000 (moving them from a storage server to this RAID); basically I'm trying to achieve 500 MB/s+ for video editing. Do you see this as a viable option?
 
I would like to do something similar to the OP. I was thinking of an LSI MEGARAID SAS 9271-8I + 8x Seagate ST4000DM000 (moving them from a storage server to this RAID); basically I'm trying to achieve 500 MB/s+ for video editing. Do you see this as a viable option?

The drives you listed are slower 5400rpm-class drives (they say 5900rpm). Those drives are designed to be cheap and low-powered and are a poor choice if you actually need high-end sustained throughput; however, they can be made to work if that's all you have to work with.

IF you are using this for active video editing, then your needs are a lot different.
What you need is a lot of secure storage for one part and very high sustained I/O for the other part.

First, what you need is stable, secure, backed-up storage for the original footage and also for the finished end results.
For that, a bunch of drives in RAID 6 (or RAID 6 with a hot spare) would work well.

Second, you need very high-I/O scratch drives for the footage you are actively working with and the files you are writing out.
Depending on the capacity needed, I'd suggest NVMe/SSD for this part if you can afford it; otherwise RAID 0, RAID 10, or RAID 5 for the scratch storage, because RAID 6 will have too much overhead for your controller to deliver maximum straight read/write speeds.

You might make it all work on just one storage platform if you need to, but RAID 6 across 8x 5400rpm drives is going to be sluggish for maximum sustained write speeds compared to other options.

What I might suggest, if you want to use those drives, is to take those 8 drives, make a RAID 6 + hot spare out of them, and use that for your bulk archival storage, then invest in a couple of big SSDs/NVMe drives, or a couple of 10k or 7200rpm drives, for the scratch storage.
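
For a rough sense of whether 500 MB/s is even in reach with those drives, here's a ceiling estimate assuming ~150 MB/s sustained per ST4000DM000 (an assumption; real rates drop toward the inner tracks, and sustained parity writes on a busy controller will land well below these ceilings):

# Theoretical sequential-throughput ceilings for 8x ~150 MB/s drives.
PER_DRIVE_MBPS = 150   # assumed sustained rate for a 5900rpm ST4000DM000
ceilings = {
    "RAID 0  (8 data drives)":          8 * PER_DRIVE_MBPS,
    "RAID 10 (4 data drives)":          4 * PER_DRIVE_MBPS,
    "RAID 6  (6 data drives)":          6 * PER_DRIVE_MBPS,
    "7-drive RAID 6 + spare (5 data)":  5 * PER_DRIVE_MBPS,
}
for layout, mbps in ceilings.items():
    print(layout, "-> ~", mbps, "MB/s ceiling")

So the ceiling is there on paper, but that's exactly the part that gets eaten by parity overhead under a real sustained write load.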
 
Since speed, I/O, and capacity are not all that important:

If you want to keep your OS and data drives separate and are not concerned about the space:
2 drives Raid 1
5 drives Raid 6
1 Drive hot spare

Otherwise if you want to just have 1 big volume and partition it, go with
7 drives Raid 6
1 Drive hot spare

IF capacity was an issue, then all 8 drives in Raid 6 would be a good option.

There's no reason for hot spares; they're useless unless you plan to put this server in some remote place that you cannot get to easily. A drive sitting there with no data on it isn't really doing anything, so put it into the array, be done with it, and just keep a spare drive around anyway, especially with RAID 6 or RAID 10.

RAID 10 and be done with it: faster rebuilds, a chance of surviving 2+ drive failures (depending on which drives fail), and just have backups anyway.
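
On the "2+ drive failures, depending" point, the dependency is just whether the second failure lands on the first drive's mirror partner. Quick odds, assuming the second failure hits a random one of the remaining seven drives:

# Odds an 8-drive array survives a 2nd random drive failure,
# given one drive has already failed.
from fractions import Fraction

survive = {
    "RAID 6":  Fraction(7, 7),   # tolerates any two failures
    "RAID 10": Fraction(6, 7),   # dies only if the mirror partner fails
    "RAID 5":  Fraction(0, 7),   # any second failure is fatal
}
for level, p in survive.items():
    print(level, "->", f"{float(p):.0%}", "chance of surviving failure #2")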
 
I've been running 3 VMs on Hyper-V for 7 years now, 24/7/365, on 16 drives, 2 Xeon 5650s, 32 GB, 2008 R2. Not a single hour lost to bad drives, though 3 of them have gone bad over the years.
Each VM has its own 4 drives in Raid-6, the host has 3 drives in Raid-5, plus 1 hot spare. Dell T710.

Raid-6, I would never go Raid-5 or less on critical machines.

my 2 cents

Just do a 4-drive RAID 6 and be done. Hot spares are useless, again, unless you have a server in some remote location that no one can get to easily.
 
I'd have ESXi booting from an SD card and RAID 50 for the 3 VMs; if there were an instance of SQL, I'd use RAID 10 instead. HP used to support 2012 R2 installs on SD, but I've heard mixed results with 2016.

You should be using RAID 10 anyway; RAID 5 is bad enough, and RAID 50 is even worse.
 
Mirror your boot drives and do a large RAID 5 volume with the rest of the drives. RAID 6 does double writes, has much higher processing overhead, and will basically halve the performance of your controller.

Everyone says performance isn't a concern.. till it is. Don't forget about a backup solution on a separate piece of hardware.

RAID 10. RAID 5 has NO place with spinning rust and today's large drives, period. Unless you're using SSDs, you should not be using RAID 5 anymore.
 
RAID 6 with 1 or 2 hot spares and be done with it.
They are 600GB drives, not multi-TB drives, so even RAID 5 would be okay, but I would go with RAID 6.
 
Sorry, ya, I completely missed that they're 600GB SAS drives, my bad! Again though, don't bother with hot spares; just put them all into the array for more performance and space, and keep a cold one on the shelf!
 
With 600GB drives I would put the OS on 2 in RAID 1, 5 in RAID 5, and keep a single hot spare.

We have been moving back to RAID 5 for SSD volumes since rebuild times are so fast. A 600GB 15k drive should rebuild fast enough not to cause issues.
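
Ballparking that rebuild claim (the rebuild rates here are assumptions for a controller under some load, not HPE specs):

# Rough rebuild-time estimate for a 600 GB drive at assumed rebuild rates.
CAPACITY_MB = 600 * 1000

for rate_mb_s in (50, 100, 150):      # assumed MB/s the rebuild actually gets
    hours = CAPACITY_MB / rate_mb_s / 3600
    print(f"at {rate_mb_s} MB/s: ~{hours:.1f} h to rebuild")
# ~3.3 h, ~1.7 h, ~1.1 h -- versus a day or more for a big multi-TB SATA drive.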
 
I always only use raid 1 or raid 10.

raid 5/6 is too complex and risky for my liking.

I also usually favour software RAID over hardware RAID, to avoid vendor lock-in and to get software RAID features.
 
Also, with RAID 5/6 there's the MASSIVE performance hit you take if a drive does die, versus RAID 10.
 
RAID 10. RAID 5 has NO place with spinning rust and today's large drives, period. Unless you're using SSDs, you should not be using RAID 5 anymore.

RAID 5 is still a valid level of protection. I used it as a tier-2 level of protection for years on SAS drives. Not everyone can afford to RAID 1/10 everything, and it's a waste. I get your statement for chumpy remote offices and home implementations... but in enterprise installs? Still very, very common and a solid tier.
 
Also, with RAID 5/6 there's the MASSIVE performance hit you take if a drive does die, versus RAID 10.

Depends on the hardware controller and the implementation. There should be virtually no impact from a drive failure. Any vendor whose enterprise-class platform takes a performance hit when a drive fails would go out of business after a few years.
 
RAID-1 for the OS is a waste of space. Depending on the OS, particularly for VM hosts, I'll configure a 45-60GB virtual disk in the controller and the rest as a VD containing the config, VHDs, datastore, etc. In this circumstance RAID-5 would be okay, as write latency is not important, but I have a suspicion that RAID-5 is particularly hard on 15K drives. If the lifetime of the drives is of any concern, RAID-10 gets my vote. Stay away from cacheless controllers for RAID-5.
 
Well, the config isn't really great. So... 2x600 mirror for OS, 2x600 mirror for file server, 2x600 mirror for other VMs.

Why? Just trying to maximize reliability and flexibility and rebuild speed.

Why is it a bad config? The OS should occupy mirrored low density stuff, possibly even tiny cheap flash (cards).

If you had that in the mix I might be tempted to RAID 5 the big drives. RAID 6 if you're worried about failure during rebuild time (which can be very long with drives that large). Could be awful if there's not good power at the remote locations though.

I'd probably stick with the 3 mirrored solution, even with the waste on the OS mirror.

Another option would be to RAID10 the whole thing, not as flexible, but maybe we don't care since it's remote. You'd still get pretty good reliability and good rebuild speeds.
 
^^^ Too much, too complex, and too many size constraints. As you said, RAID 10 the entire thing; in the end, ALL the commands go through the same single RAID card, and the processor on that card handles them in a specific order.

One big RAID 10 for maximum performance, with the OS partitioned off in case you need to reinstall just the OS.
 
Well, the config isn't really great. So... 2x600 mirror for OS, 2x600 mirror for file server, 2x600 mirror for other VMs.

Why? Just trying to maximize reliability and flexibility and rebuild speed.

Why is it a bad config? The OS should occupy mirrored low density stuff, possibly even tiny cheap flash (cards).

If you had that in the mix I might be tempted to RAID 5 the big drives. RAID 6 if you're worried about failure during rebuild time (which can be very long with drives that large). Could be awful if there's not good power at the remote locations though.

I'd probably stick with the 3 mirrored solution, even with the waste on the OS mirror.

Another option would be to RAID10 the whole thing, not as flexible, but maybe we don't care since it's remote. You'd still get pretty good reliability and good rebuild speeds.

The mirror scheme isn't bad, and for a remote office, one you will not have eyes on all the time, it seems like a decent fit. Use a UPS to smooth out power issues and it shouldn't be a problem. RAID 6 has its place, but it also does double writes and, depending on the controller, will halve your performance. If you need more space, a NAS might be a better fit, with a better long-term vision for growth.
 
The idea of one large VD in RAID-5 or 10 is only advisable if the DL380 in question is Gen9 or newer and in UEFI mode; otherwise the OS will only allow 2TB on the boot disk, partitioned or not. RAID-10 would lose up to 400GB and RAID-5 would lose 2.2TB in MBR mode. This is why sizing the boot disk at the controller VD level is necessary, at least for older servers.
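
Spelling out that arithmetic with nominal 600 GB drives against the ~2 TB MBR boot-disk limit:

# How much of each 8x600GB layout sits past the ~2 TB MBR limit.
GB = 600
MBR_LIMIT_GB = 2000            # ~2 TB addressable with MBR partitioning

volumes = {"RAID 10 (8 drives)": 4 * GB,    # 2400 GB
           "RAID 5  (8 drives)": 7 * GB}    # 4200 GB
for name, size in volumes.items():
    lost = max(0, size - MBR_LIMIT_GB)
    print(f"{name}: {size} GB total, ~{lost} GB stranded in MBR mode")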
 
I always thought it was best practice to follow this formula for the number of drives in a RAID 5/RAID 6 (quick check at the end of this post):

Total drives - allowed failures = some number you can hit with 2^x

So if you do RAID 1 for the OS, that leaves you 6 drives.

Either do a RAID 6 of 6 drives (4 data),
or a RAID 5 of 5 drives (4 data) + a hot spare.

This is assuming you don't have a VM doing a ton of small random I/O and performance isn't a concern.

Otherwise I'd RAID 10 the whole thing, or maybe do a couple of RAID 1s for boot + each VM.
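
And the quick check mentioned above, for anyone who wants to see which group sizes satisfy that power-of-two-data-drives rule of thumb (treat it as a guideline, not a hard requirement):

# Which RAID group sizes leave a power-of-two number of data drives?
def pow2_data(total_drives, parity_drives):
    data = total_drives - parity_drives
    return data > 0 and (data & (data - 1)) == 0

for total in range(3, 13):
    r5 = "yes" if pow2_data(total, 1) else "no"
    r6 = "yes" if pow2_data(total, 2) else "no"
    print(f"{total:2d} drives  RAID5: {r5:3s}  RAID6: {r6:3s}")
# A 5-drive RAID 5 and a 6-drive RAID 6 both give 4 data drives,
# matching the two options above.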
 