raid 50 performance

Red Squirrel

I bought 8 2TB drives with the intention of doing a raid 10. Then I started thinking I could maybe do a raid 50 instead. Something like this:

1: 3x 2TB in raid 5 = 4TB usable, three of those in raid 0 = 12TB using 9 drives (would have to buy another)

OR

2: 4x 2TB in raid 5 = 6TB usable, two of those in raid 0 = 12TB using 8 drives

VS raid 10:

3: 8x 2TB in raid 10 = 8TB usable


The 2nd option is looking attractive. What kind of performance can I expect out of something like this vs a regular raid 10 or raid 5? Would it be somewhere in the middle?
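Purely as an illustration, here is roughly what option 2 would look like as stacked mdadm arrays (a minimal sketch, not a tested recipe; the /dev/sd* names and md numbers are made up):

# Option 2: two 4-disk raid 5 sets (~6TB usable each), striped together with raid 0.
# Device names /dev/sdb..sdi and the md numbers are placeholders.
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sdf /dev/sdg /dev/sdh /dev/sdi
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2    # ~12TB usable
cat /proc/mdstat    # watch the initial sync of the two raid 5 members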
 
What RAID card ?

What usage ?

RAID5 is a bit risky nowadays, 50 is worse of course.
 
Nothing special for the card, just an IBM card that's reflashed to act as a JBOD HBA. This would be using Linux mdadm raid. So raid 50 is actually considered risky? I did not realize that.
 
What RAID card ?

What usage ?

RAID5 is a bit risky nowadays, 50 is worse of course.

I, too, would like to know why you think this would be worse.
RAID5 is no more "risky" today than it was in years previous.
 
I'm guessing he is saying it's riskier compared to the safer options that are now readily available, e.g. RAID 6 (RAID-Z2) or even RAID-Z3. Back in the day RAID 5 was all there was, but now ZFS, md RAID, and hardware RAID controllers are all much better.
 
I think it's about the URE articles claiming raid 5 would stop working in 2009... However, the whole idea that for every 12TB you read from a raid 5 array one sector will be corrupted is FUD. With that said, I mostly don't use raid 5 anymore, except on the machine I'm typing on: an HP i7 desktop with only 4 SATA ports on the motherboard (even though the chipset supports 6).
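For a sense of where the 12TB figure comes from, here is a back-of-envelope sketch assuming the oft-quoted consumer URE spec of 1 error per 10^14 bits and independent errors (the same simplifications those articles made):

# Probability of hitting at least one URE while reading 12TB (Poisson approximation).
awk 'BEGIN {
  bits = 12e12 * 8                  # 12TB read during a rebuild, expressed in bits
  p = 1 - exp(-1.0e-14 * bits)      # URE rate assumed to be 1e-14 per bit read
  printf "P(>=1 URE over 12TB) ~ %.0f%%\n", p * 100
}'

That works out to roughly 60%, which looks scary on paper; in practice drives usually do much better than their quoted spec, which is why treating it as a hard "one bad sector per 12TB" rule is FUD.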
 
I guess one issue with raid 5 is rebuild times as disks get bigger; it can take nearly a day to rebuild a large raid 5 array. But yeah, I never really understood the FUD about raid 5 suddenly not being stable anymore. Never had issues. Of course there are probably better options out now, such as ZFS, but for now I'll most likely stick with mdadm.

From playing around, it seems raid 0 can't be expanded. I guess that makes sense, since it would involve reworking the striping, which is probably quite involved and just never got coded in. That also explains why I can't expand my raid 10.

Here's another thought: what about making small raid 0s out of 2-3 drives each, then doing a raid 5 across those? Am I crazy? Would that give me better performance than a straight raid 5, or am I better off just doing a regular raid 5 or 6? I'm leaning more towards raid 6 than 5, but I might do 5 since this LUN is probably going to be mostly for VMs, so I can afford some risk; it won't hold actual data, though it would still be a pain to rebuild the VMs. Of course I'll probably have backups of the VMs somewhere anyway.

Doing raid 0 + 5 seems kind of intriguing, though; I just don't know if it's even common, and maybe there's a reason it isn't, such as no benefit over a regular raid 5. Curious to know what you think.
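To make the idea concrete, a minimal sketch of raid 0 + 5 (raid 05) with mdadm, using made-up device names. One likely reason it's uncommon: losing any single disk fails its entire raid 0 member, so the outer raid 5 then has to rebuild a whole multi-terabyte member from parity:

# Hypothetical raid 05: four 2-disk raid 0 sets used as members of one raid 5.
mdadm --create /dev/md11 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --create /dev/md12 --level=0 --raid-devices=2 /dev/sdd /dev/sde
mdadm --create /dev/md13 --level=0 --raid-devices=2 /dev/sdf /dev/sdg
mdadm --create /dev/md14 --level=0 --raid-devices=2 /dev/sdh /dev/sdi
mdadm --create /dev/md10 --level=5 --raid-devices=4 /dev/md11 /dev/md12 /dev/md13 /dev/md14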

As a side note, is there a big performance degradation between raid 5 and 6? If I take the KISS approach, a raid 6 with these 8 drives is probably my best bet, really. I should get half-decent performance with this many spindles anyway, right? I have a raid 5 with 8 drives and find the performance decent for everyday use; nothing impressive in benchmarks, but fine for general usage.
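If the KISS raid 6 route wins out, the create is a one-liner, and mdadm can reshape raid 5/6 onto more disks later (device names below are placeholders):

mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]
# Later, to grow onto a hypothetical ninth disk:
mdadm --add /dev/md0 /dev/sdj
mdadm --grow /dev/md0 --raid-devices=9    # older mdadm versions may also want --backup-file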
 
Definite yes to RAID 5 being risky; move to RAID 6. mdadm actually has great performance for software RAID and doesn't limit you in terms of rescue options, hardware, or distro. But if data integrity is your utmost concern, I'd have to strenuously recommend ZFS for the file system, ECC memory, RAID-Z3 (equivalent to what would be RAID 7), and of course offsite backups (Backblaze or CrashPlan).

Update: also, if you're more interested in speed you could go RAID-Z2 (like RAID 6) with multiple vdevs, giving you a storage setup very similar to RAID 60. That's what I've got at home: 8x 3TB disks in RAID-Z2, as two 4-disk vdevs striped together.
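For reference, a minimal sketch of that layout as a single zpool create (pool name and disk names are placeholders):

# Two 4-disk raidz2 vdevs striped together, roughly the ZFS take on RAID 60.
zpool create tank \
    raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
    raidz2 /dev/sde /dev/sdf /dev/sdg /dev/sdh
zpool status tank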
 
I don't want to do ZFS as you can't really live expand, and I don't want to switch my OS as I still have two md arrays.

Though I am starting to consider btrfs... is that stable now? The wiki says it is, and you can live expand its raid 10, so I'd just go back to my original plan of raid 10. This install is about a year old though, so I'm not sure whether the kernel from a year ago supports it. I may have to stick with md raid 10 and live with the fact that I can't expand it. At least once I figure out how to stop the broken array; it keeps saying it's busy and refuses to let me stop it, so I can't release the drives.
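For what it's worth, the btrfs route would look roughly like this, assuming the kernel is new enough and the old disks can eventually be freed up; the mount point and device names are illustrative only:

mkfs.btrfs -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mount /dev/sdb /mnt/storage    # any member device can be used to mount the filesystem
# Later, live expansion two disks at a time:
btrfs device add /dev/sdf /dev/sdg /mnt/storage
btrfs balance start /mnt/storage    # restripe existing data across the new disks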
 
I read otherwise, or is that not the case? I read that the only way to expand is basically to create a whole new array and combine them; you can't just add one more drive (for raid 5) or two more drives (for raid 10). Either way, I don't want to switch OSes, and this server can't be rebooted, so I need to stick with mdraid, or maybe btrfs if my kernel supports it; I still need to read up more on that.

But I also just learned you can't live expand raid 10 with mdraid, so that's a pretty big bummer right there. With btrfs you can, but I'm not sure I'll be able to get that working on this machine.
 
You can't expand a vdev, true. But if you had a raid 50/60-style pool, you could add another raidz2 vdev to it. And for raid 10, you absolutely can expand two drives at a time by adding another mirror; I've done so.
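A minimal sketch of both expansions, with made-up pool and device names:

# Grow a pool of raidz2 vdevs by adding another whole raidz2 vdev:
zpool add tank raidz2 /dev/sdj /dev/sdk /dev/sdl /dev/sdm
# Grow a pool of mirrors (the ZFS equivalent of raid 10) two drives at a time:
zpool add tank mirror /dev/sdn /dev/sdo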
 
You should be aware that ZFS uses newly added vdevs only for new data, or rather it only balances across vdevs as new data is written. That means a lot of new data will go mostly to the new vdevs until the pool is balanced again.
 
As I understand it, unless the old vdevs are almost full, it should balance writes across all vdevs. And since ZFS is copy-on-write, things should gradually rebalance on their own. You can speed this up (I did) by doing copy-and-rename operations on files if you want...
 
Yeah, ZFS always has to write at least the minimal record size to each disk in the pool, but it will write more to the emptier vdevs.

You can rebalance the pool yourself by simply moving data from one directory to another, though; ZFS will rewrite it (sending more to the emptier vdev), which frees up space on the vdev that was previously more full.
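A minimal sketch of that manual rebalance, assuming there is room for a temporary copy; a plain rename inside one dataset won't rewrite any blocks, so it has to be copy-then-swap (paths are placeholders):

cp -a /tank/data /tank/data.rebalance    # rewrites the blocks, favouring the emptier vdev
rm -rf /tank/data
mv /tank/data.rebalance /tank/data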

Also, OpenZFS is adding some new features to help with vdev balancing, letting you set quotas on vdevs so that no single vdev gets into the upper 90% of capacity long before the others do.

That's really where the problems start: once one vdev is 95% full and another is 50% full, writing new data means finding a bunch of tiny random holes on the 95%-full vdev, and that slows the whole write down.

For example, say you have one vdev and fill it to 50%, then add another vdev and keep writing. Without quotas, ZFS will keep filling both up, and before long vdev 1 will be at 90% while vdev 2 might only be at 60%.

With the new feature you would be able to set, say, a 70% quota on vdev 1. That tells ZFS to start writing only the smallest record size to vdev 1 once it reaches 70% capacity, instead of waiting until it has already run out of contiguous space.

The downside is that you won't take full advantage of striping across all vdevs for maximum performance, but the upside is that, if you set your quotas right, you'll never hit the abysmal performance of ZFS having to hunt for tiny random holes to write its stripes.

If you want to find out more about this, see:
https://www.youtube.com/watch?v=UuscV_fSncY&t=0m30s

from: http://www.beginningwithi.com/2013/11/18/openzfs-developer-summit/
 