RAID 6/Z2: how many disks?

EnderW

[H]F Junkie
Joined: Sep 25, 2003
Messages: 11,249
For those using a RAID 6 or Z2 pool, how many disks are you using per pool?

I set up mine with 6, which is on the more conservative side, as this is for data that is not backed up anywhere else.

However, after parity drives, ZFS overhead, formatting, and leaving some free space, I'm only getting about 50% of my advertised hard drive capacity for actual storage.

So I'm thinking about moving to 8-drive pools.

Curious what everyone else is using.
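For what it's worth, the ~50% figure is roughly what you get when the TB-to-TiB conversion, the two parity drives out of six, and a free-space cushion all stack up. A rough back-of-the-envelope sketch in Python (the 10TB drive size is just an example, and the ~15% free-space headroom is my own assumption, not a ZFS rule):

# Rough breakdown of where "~50% of advertised capacity" can come from on a 6-wide RAIDZ2.
ADVERTISED_TB = 10                 # per-drive advertised size (decimal TB), example value
DRIVES = 6
PARITY = 2                         # RAIDZ2

tib_per_tb = 1e12 / 2**40          # decimal TB -> binary TiB, ~0.909
data_fraction = (DRIVES - PARITY) / DRIVES   # 4 of 6 drives hold data
free_space_headroom = 0.85         # keep ~15% free so the pool stays happy (assumption)

usable_tib = DRIVES * ADVERTISED_TB * tib_per_tb * data_fraction * free_space_headroom
print(f"{usable_tib:.1f} TiB usable out of {DRIVES * ADVERTISED_TB} TB advertised "
      f"(~{usable_tib / (DRIVES * ADVERTISED_TB):.0%})")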
 
Yeah, let's say you have six 10TB disks. Actual usable space per disk is something like 9.5TB, I think. Since two drives are taken by parity data, that means four are usable, so that's about 38TB of storage space for "60TB" of disks.
With the atrocious write speeds of RAID6, you may want to consider remaking the array as RAID10, which would give you 28TB usable but probably 10x faster write speeds.
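If anyone wants to plug in their own numbers, here's a quick sketch of the raw data capacity (before any free-space reservation or ZFS metadata) for RAIDZ2 versus striped mirrors; the per-drive figure is just the advertised TB converted to TiB:

def usable_tib(drives, advertised_tb, layout):
    """Raw data capacity in TiB, ignoring ZFS metadata and slop space."""
    per_drive_tib = advertised_tb * 1e12 / 2**40   # ~9.1 TiB for a "10TB" drive
    if layout == "raidz2":
        data_drives = drives - 2          # two drives' worth of parity
    elif layout == "mirror":
        data_drives = drives // 2         # striped 2-way mirrors
    else:
        raise ValueError(layout)
    return data_drives * per_drive_tib

for layout in ("raidz2", "mirror"):
    print(layout, round(usable_tib(6, 10, layout), 1), "TiB")
# raidz2 -> ~36.4 TiB, mirror -> ~27.3 TiB, in the same ballpark as the figures above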


I personally have 8 drives in a RAID6 array right now. I would not go any more than that, and IMO that is already a little high. With disk sizes as large as they are nowadays, sure, it's nice that I can have 2 drive failures and still keep my data, but the rebuild will probably take an entire week or more if a drive fails.
 
Guys, I understand how to calculate how much space I'll have under different scenarios. The question is more about your preference/comfort level for the trade-off between capacity and redundancy.

As an aside, even with 10Gb, the network is still the limiting factor on my write speeds with a 6-disk RAIDZ2.
 
I would not go over 10 drives in a raidz2 vdev.

With that said, at work I have a few 10-drive raidz3 vdevs.
 
Guys, I understand how to calculate how much space I'll have under different scenarios. The question is more about your preference/comfort level for the trade-off between capacity and redundancy.

If it's not backed up, then mirrors are more likely to keep your data safe. The biggest issue is rebuilds down the line: RAID5/Z1 involves significantly more rebuild work than mirrors after a failure, and RAID6/Z2 even more.

With six or more drives in mirrored pairs, you're going to be bumping up against 10Gbit (i.e. ~1.1GB/sec) linear transfer rates, and your random I/O will be higher too.

I only use parity stripes (Z1/Z2/Z3) for backups now.
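To put a rough number on "more rebuild work": resilvering a failed mirror leg only reads the surviving half of that one pair, while rebuilding a RAIDZ vdev has to read every surviving drive in the vdev. A quick sketch of the worst-case data read per rebuild (10TB drives as an example; ZFS only resilvers allocated blocks, so a partly-full pool reads less):

def rebuild_read_tb(layout, vdev_width, drive_tb):
    """Worst-case TB read off surviving disks to rebuild one failed drive."""
    if layout == "mirror":
        return drive_tb                      # just the surviving half of that pair
    if layout in ("raidz1", "raidz2", "raidz3"):
        return (vdev_width - 1) * drive_tb   # every remaining drive in the vdev
    raise ValueError(layout)

print(rebuild_read_tb("mirror", 6, 10))   # 10 TB read to rebuild a mirror leg
print(rebuild_read_tb("raidz2", 6, 10))   # 50 TB read to rebuild a 6-wide RAIDZ2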
 
Yeah, I think I'll just stick with 6-drive pools, especially since it's not backed up anywhere else.
 
I'm using a single vdev 16 drives wide in RAIDZ2. I wish I had used fewer drives. I don't have the chassis space to easily add another vdev of the same size, so my choices are: use a vdev with 12 drives in RAIDZ2, or get creative with racking HDDs behind the backplane of a 12-drive chassis.

On the other hand, if I used 6-drive RAIDZ2 vdevs, for the same 32 drives it'd be expensive in parity. What's that, 5 vdevs and 10 parity drives? Nah, not gonna do that.

Result: dunno. Maybe 10 drives is the sweet spot.
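A quick loop over candidate widths makes that trade-off concrete (32 bays as above; leftover bays just sit unused in this sketch):

TOTAL_BAYS = 32
PARITY_PER_VDEV = 2   # RAIDZ2

for width in (6, 8, 10, 12, 16):
    vdevs = TOTAL_BAYS // width
    drives_used = vdevs * width
    parity = vdevs * PARITY_PER_VDEV
    data = drives_used - parity
    print(f"{width}-wide: {vdevs} vdev(s), {parity} parity drives, "
          f"{data}/{drives_used} drives hold data ({data / drives_used:.0%})")
# 6-wide -> 5 vdevs and 10 parity drives (67% data); 10-wide -> 3 vdevs and 6 parity drives (80% data)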


I only have gigabit networking at the moment. Even with LACP, it would saturate the network well before the drives were an issue.

EnderW
 
No backups is ballsy regardless of RAID level / number of disks. What's the fallback in the event of a cryptolocker or file deletion?

I might try it on my media server since everything can be re-"acquired", but I can't imagine not having backups for anything important.
 
No backups is ballsy regardless of RAID level / number of disks. What's the fallback in the event of a cryptolocker or file deletion?

I might try it on my media server since everything can be re-"acquired", but I can't imagine not having backups for anything important.
Unimportant data, very voluminous. Going that way myself shortly. Backup costs too much. Even drive parity gets expensive.
 
Unimportant data, very voluminous. Going that way myself shortly. Backup costs too much. Even drive parity gets expensive.
Same here. I don't wanna spend 2 weeks re-ripping all these discs but I'll take the risk to save a few grand in hard drives and servers.
 
I have two arrays per my sig: 8x 4TB drives and 8x 8TB drives. I did this because ports on cards are typically multiples of 8. Also, my 4TB array was originally on an old controller that only had 8 ports, so to use both arrays I had to limit the new one to 8 as well. I think 8 is a good number for the capacity/parity trade-off. Once I got an NVMe drive and 10Gbit fiber between PCs, I discovered the array's limit of around 350MB/sec. I don't anticipate needing to revisit my storage needs for at least another 2 years, and at that point I'd be upgrading the 4TB array with 12TB or larger disks for another RAID 6. I back up all the important data on there with Backblaze, about 30TB right now.
 