How best to configure a 24 drive zPool??

Deadjasper

Been researching this and at this point it's clear as mud. Need some help.

Supermicro 24-bay SC846 chassis
X8DTE-F motherboard
2 x Xeon E5645
48GB ECC RDIMM (is this enough?)
LSI 9201-8i HBA (JBOD controller)
24 x 2TB HDDs (mix of SAS and SATA)

My first plan was to make three 8-drive vdevs, but after running the numbers through the ZFS calculator here (https://wintelguy.com/zfs-calc.pl) I decided that was a bad idea, since usable capacity takes a serious hit compared to two 12-drive vdevs.
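For anyone following along, the back-of-envelope math looks roughly like this. These are raw figures for 24 x 2TB drives and ignore ZFS metadata, padding, and the slop factors the calculator applies, so treat them as ballpark only:

# rough usable capacity: vdevs x (width - parity) x drive size
echo "3 x 8-wide RAIDZ2:  $(( 3 * (8 - 2) * 2 )) TB"   # 36 TB
echo "2 x 12-wide RAIDZ2: $(( 2 * (12 - 2) * 2 )) TB"  # 40 TB
echo "4 x 6-wide RAIDZ2:  $(( 4 * (6 - 2) * 2 )) TB"   # 32 TB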

Since I've never done a multi-vdev zpool, I'm a bit unclear on how to proceed. Do I create each vdev and then combine them into one zpool? My goal is the largest-capacity pool using double parity. This will be a media server, so performance isn't critical.

TIA
 
I would do 6 drives in a vdev.

In FreeNAS, after you create the first vdev and zpool, when you add more drives it will let you add another vdev to the zpool, as long as it is created as the same vdev type as the others. It is a simple process once you play with it some.
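For reference, a rough command-line sketch of that flow; the pool name "tank" and the da0..da23 device names are just placeholders, and on FreeNAS you would normally do this through the GUI so the middleware stays in sync:

# create the pool with the first 12-disk RAIDZ2 vdev
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11

# later, grow the same pool by adding a second vdev of the same type and width
zpool add tank raidz2 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 da22 da23

# confirm both raidz2 vdevs now sit under the one pool
zpool status tank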
 

Thanks. Gonna play around with it some this weekend.
 
Depending on the situation, I would lean toward two 12-drive RAIDZ3 vdevs. At least, that is what I would do at work, where I can't afford downtime to rebuild an array and I can't have a second system duplicating the first (no budget for that). I do have LTO-7 tape backup and offsite storage, but restoring a 50TB+ array will take many days either way.
 
Recommended drive quantities, based on how blocks split evenly across the data disks (a power-of-two number of data drives plus parity):

RAIDZ1: 3, 5, 9, etc. (2+1, 4+1, etc.)
RAIDZ2: 4, 6, 10, etc. (2+2, etc.)
RAIDZ3: 5, 7, 11, etc. (2+3, etc.)

HOWEVER, the smallest quantity in each config is a waste of space.

Mirrored vdevs are also an option; recovery time is nearly instant, but you will lose half the drives to redundancy.
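A sketch of what the mirrored layout would look like with these 24 disks as 12 two-way mirrors (placeholder pool and device names again; roughly 24TB usable from 2TB drives, and each mirror can lose one disk):

zpool create tank \
  mirror da0 da1   mirror da2 da3   mirror da4 da5   mirror da6 da7 \
  mirror da8 da9   mirror da10 da11 mirror da12 da13 mirror da14 da15 \
  mirror da16 da17 mirror da18 da19 mirror da20 da21 mirror da22 da23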
 
Think I read somewhere that a vdev should contain no more than 12 drives.
Correct, but that is not due to some shortcoming of ZFS. That number comes from the math on the likelihood of a second drive failing during a resilver: as you add drives beyond 12, the probability climbs almost exponentially, especially given how long a set of 12TB drives takes to resilver.
 
One thing I'm not clear on: if I create a zpool comprised of four 6-drive vdevs in RAIDZ2, how many parity drives will the pool have?
 
Pools do not have parity; vdevs have the parity. Also, there are no individual drives dedicated to parity. Each vdev could lose any 2 drives (using RAIDZ2), so out of your entire pool you could potentially lose 8 drives and still have your data, as long as no more than 2 per vdev were lost.
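Putting rough numbers on that for four 6-wide RAIDZ2 vdevs of 2TB drives (again ignoring metadata and padding overhead):

echo "usable data space:      $(( 4 * (6 - 2) * 2 )) TB"  # 32 TB
echo "space given to parity:  $(( 4 * 2 * 2 )) TB"        # 16 TB, striped across all 24 drives
echo "worst-case drive losses survived: $(( 4 * 2 ))"     # 8, if no vdev loses more than 2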
 
NO!

There are no individual drives for parity! Every drive will contain both DATA and PARITY stripes.

Thank you, sir. I was under the impression that a Z1 used one drive for parity and a Z2 used two, etc. Guess I was wrong.
 
Having been researching this recently as well, I found the way parity is handled confusing at first, too. ZFS stripes the parity across every drive along with the data, while UnRAID, on the other hand, uses a dedicated drive exclusively for parity.

The confusion for me came from discussions commonly referring to "+ <n> drive(s) of parity", which is really just shorthand for how much usable capacity the chosen level of redundancy consumes (e.g. RAIDZ2 = two drives' worth of usable space goes to parity, but not two physical drives).
 
Correct, but that is not due to some shortcoming of ZFS. That number comes from the math on the likelihood of a second drive failing during a resilver: as you add drives beyond 12, the probability climbs almost exponentially, especially given how long a set of 12TB drives takes to resilver.

According to this calculation:
https://jro.io/r2c2/

21+3 is similar in reliability to 4+2.
 
In RAID 5/6 and RAID-Z1/2/3, the RAID controller (be it hardware or software) does not actually dedicate some quantity of the drives to user data and the remainder to parity data. This sort of configuration was employed in RAID 4, but having all the parity information for the whole array reside on a single disk created contention for access to that disk. The solution was to stagger the parity data across all the disks in the array, in a sort of barber pole fashion. For the sake of making visualization and discussion easier, I will still refer to "data drives" and "parity drives", but this distinction is purely conceptual. I would encourage readers to familiarize themselves with the standard RAID levels before reading on.


Taken from the white paper.
 
I run a 2 x 24-drive RAIDZ3 setup with 4TB drives for our offsite backup. It also has 12 hot spares (well, had; it's down to 9 after 18 months). The point of the overkill on spares is that I don't want to have to see it in person ever again. It should be lifecycled before replacement spares are even considered.

Our on-site primary is the same, with one hot spare. It has gone through five drives in the past two years.

RAM depends completely on your performance needs, as long as you aren't doing deduplication. More RAM means more ARC, which means faster reads of recent data. ZFS will run just fine with 8GB of RAM; you'll just be limited by disk IO to a larger extent.
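If it helps, the spare handling above maps to a couple of plain zpool commands (pool and device names here are placeholders):

# attach hot spares to an existing pool
zpool add tank spare da24 da25

# if a disk dies and a spare has not kicked in automatically, swap it in by hand
zpool replace tank da7 da24

# watch resilver progress and overall pool health
zpool status tank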
 