zPool members

Ripley

Is the recommendation still to have zpools with 9 or fewer disks? I'm about to set up 24 drives and would like to do 2 RAID-Z2 pools of 12 disks each.
 
I would go for 3x 8 drives in raidz2

Resilver performance is critical. And resilvering with ZFS is dog slow!
I have 5 x 2TB drives in raidz1 and have used about 50% of the pool space. Resilvering takes around 12 hours now.

Performance is good, I still get over 300MB/s read and 100MB/s write. But there are a lot of snapshots that are created and deleted every day and around 100GB of the storage has deduplication enabled.

I expect the resilver/scrub time to climb even higher as time moves on. When I hit a 48-hour resilver/scrub time I will copy the pool over to a new pool to "reset" the "fragmentation".

ZFS is a real beast! But it really needs the block pointer rewrite feature.
 

If you do not have a lot of RAM (roughly 4 GB+, plus about 2 GB+ per TB of data) you may see an extreme slowdown
with deduplication. If you want to improve resilver time (and performance in general), avoid deduplication until you really need it
and have proper hardware, and use mirrored vdevs whenever possible, or at least as many vdevs as possible.

Performance is not a function of the number of disks but of the number of vdevs.
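The RAM rule of thumb above can be sketched as a quick back-of-the-envelope calculation. This is just the "4 GB base plus ~2 GB per TB" heuristic from the post, not a real DDT sizing formula; actual dedup-table memory use depends on block size and dedup ratio.

```python
# Rough sketch of the RAM rule of thumb quoted above (4 GB base plus
# ~2 GB per TB of deduped data). Ballpark only; real DDT memory use
# depends on block size and dedup ratio.

def dedup_ram_estimate_gb(data_tb, base_gb=4, per_tb_gb=2):
    """Return a rough minimum RAM estimate in GB for a deduped pool."""
    return base_gb + per_tb_gb * data_tb

# e.g. a 10 TB pool would want roughly 24 GB of RAM by this rule
print(dedup_ram_estimate_gb(10))  # -> 24
```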
 
Is the recommendation still to have zpools with 9 or fewer disks? I'm about to set up 24 drives and would like to do 2 RAID-Z2 pools of 12 disks each.

Assume you mean VDEV?


Cos the limit of a zpool is: (quoted from Wikipedia)

256 zettabytes (2^78 bytes) — Maximum size of any zpool

:D
 
The downside to three 8 drive pools is that I'm losing 6 drives instead of 4. Which is why I was planning 2 pools.
 

Yes, I mean vdevs. My thought process was to have 2 zpools, each with one 12-drive vdev, so I was equating zpool to vdev in my head.
 
Why do you want two pools? One pool with 3x8-disk vdevs in raidz2 will probably perform better than two pools, each with 1x12-disk raidz2 vdev. It does depend on your application somewhat.
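The trade-off being discussed can be put in numbers. A minimal sketch, assuming 2 TB drives and counting only raw data-disk capacity (ignoring ZFS metadata overhead); the function name is my own, not a ZFS API:

```python
# Comparing the two layouts under discussion: 2 pools of 1x 12-disk
# raidz2 each, versus 1 pool of 3x 8-disk raidz2 vdevs. Assumes 2 TB
# drives; "usable" is raw data-disk capacity, ignoring ZFS overhead.

def raidz_layout(vdevs, disks_per_vdev, parity, disk_tb=2.0):
    data_disks = vdevs * (disks_per_vdev - parity)
    return {
        "vdevs": vdevs,              # random IOPS scale roughly with vdev count
        "data_disks": data_disks,
        "usable_tb": data_disks * disk_tb,
    }

two_pools = raidz_layout(vdevs=2, disks_per_vdev=12, parity=2)
one_pool  = raidz_layout(vdevs=3, disks_per_vdev=8,  parity=2)

print(two_pools)  # 20 data disks, 40 TB raw, but only 2 vdevs total
print(one_pool)   # 18 data disks, 36 TB raw, 3 vdevs -> better IOPS
```

You pay 2 drives of capacity for the extra vdev, which is the capacity-versus-IOPS trade-off the thread keeps coming back to.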
 
You might want to rethink this 12 devices per vdev thing. Check out this thread:

http://arstechnica.com/civis/viewtopic.php?p=20797605

In particular, check out this quote from that thread (sub.mesa)

As I understand it, the performance issues with 4K disks aren't just partition alignment, but also an issue with RAID-Z's variable stripe size.

RAID-Z basically works by spreading the 128KiB recordsize across its data disks. That leads to a formula like:

128KiB / (nr_of_drives - parity_drives) = maximum (default) variable stripe size

Let's do some examples:
3-disk RAID-Z = 128KiB / 2 = 64KiB = good
4-disk RAID-Z = 128KiB / 3 = ~43KiB = BAD!
5-disk RAID-Z = 128KiB / 4 = 32KiB = good
9-disk RAID-Z = 128KiB / 8 = 16KiB = good

4-disk RAID-Z2 = 128KiB / 2 = 64KiB = good
5-disk RAID-Z2 = 128KiB / 3 = ~43KiB = BAD!
6-disk RAID-Z2 = 128KiB / 4 = 32KiB = good
10-disk RAID-Z2 = 128KiB / 8 = 16KiB = good

I'm about to test this theory with the help of some people's new NAS hardware. So i should have some performance numbers soon that can verify this. Still wanted to post it here already. :)
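sub.mesa's table above boils down to one rule: the stripe per data disk divides evenly when the number of data disks is a power of two. A small sketch reproducing the math (function names are mine, for illustration):

```python
# sub.mesa's formula: with a 128 KiB recordsize, each data disk gets
# 128 / (n - parity) KiB. The "good" counts are those where the number
# of data disks is a power of two, so the record splits evenly.

RECORD_KIB = 128

def stripe_kib(disks, parity):
    """Per-data-disk stripe size in KiB for an n-disk RAID-Z vdev."""
    return RECORD_KIB / (disks - parity)

def is_good(disks, parity):
    data = disks - parity
    # power-of-two test: exactly one bit set
    return data & (data - 1) == 0

for disks in (3, 4, 5, 9):
    print(disks, stripe_kib(disks, 1), is_good(disks, 1))
# 3-disk RAID-Z -> 64.0 KiB, good; 4-disk -> ~42.7 KiB, bad; etc.
```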
 

I'm not a ZFS developer and I can't always run benchmarks to verify my views,
but in my experience these 'optimal' numbers can help you build a perfectly balanced pool
from the beginning. Mostly, though, it matters far less than the fact that the number of vdevs
is much, much more important than the number of disks in a vdev when I need performance.

My rule is:
If I need performance, I always use 2-way or 3-way mirrored vdevs, the more the better,
ideally built from a lot of small SSDs, with resilver times of <30 min.

If I need capacity, I don't worry about it. I use backup servers with RAID-Z3 built from one vdev and 16+ disks,
despite resilver times of 30h+.
 
I think Sun (read Oracle) themselves recommend 3 or 5 disks for z1, 6 or 10 disks for z2, and 9 disks for z3.

So with 24 drives I would personally do 2x 10 drive z2's with the remaining 4 drives as spares, or 4x 6 drive z2's with no spares.

They would both yield the same usable space, but the former would have longer resilver times and lower IOPS, though it does offer spares.

Of these 24 drives, are you using any of them for the actual install of your chosen OS, or do you have dedicated drives for this? In my system I used 2x 160GB 2.5" drives for a mirrored rpool, and 14 2TB 3.5" disks for the data pool (or as I named it, dpool) - 2x 6-drive z2's with 2 spares, yielding ~14.5T usable. All drives hang off 2 LSI 3081e-r controllers (as my motherboard doesn't support hot swap unless configured for RAID rather than IDE, and ZFS doesn't like any underlying RAID other than its own).
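The "same usable space" claim above checks out: both 24-drive layouts leave 16 data disks. A quick sketch (helper name is mine):

```python
# Checking the claim above: 2x 10-drive z2 (+ 4 hot spares) and
# 4x 6-drive z2 (no spares) both leave 16 data disks out of 24,
# hence identical usable space.

def data_disks(vdevs, disks_per_vdev, parity=2):
    return vdevs * (disks_per_vdev - parity)

layout_a = data_disks(2, 10)   # 2x 10-drive z2, plus 4 spares
layout_b = data_disks(4, 6)    # 4x 6-drive z2, no spares

print(layout_a, layout_b)  # -> 16 16
```

The difference is entirely in vdev count (2 vs 4, so roughly half the random IOPS) and in whether you hold drives back as spares.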
 
Thanks for that info joltman, means I may have to rethink my 2x8 raidz2 vdevs. This messes up my whole drive layout though :[
Just a side note: if the math in Joltman's quote is correct, then raidz3's should be assembled with 4, 5, 7, or 11 disks, rather than 9.
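The side note follows directly from the same power-of-two rule: for RAID-Z3, subtract 3 parity disks and look for a power-of-two remainder. A one-liner to verify (helper name is mine):

```python
# Verifying the side note: for RAID-Z3 (3 parity disks), the counts up
# to 16 that leave a power-of-two number of data disks are 4, 5, 7, 11.

def good_counts(parity, max_disks=16):
    return [n for n in range(parity + 1, max_disks + 1)
            if (n - parity) & (n - parity - 1) == 0]

print(good_counts(3))  # -> [4, 5, 7, 11]
```

By the same rule, 9 disks gives 6 data disks, which does not divide a 128 KiB record evenly, so the note's correction of the 9-disk recommendation is consistent with sub.mesa's math.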
 