ZFS grow/shrink question

halcyon

Gawd
Joined
Mar 20, 2003
Messages
717
Hi, I am new to ZFS and looking at building a system using it. I have a specific question about its ability to grow/shrink. I'd like to run a system with 2 parity drives (RAID-Z2?). I am buying the 3TB Hitachi 5K3000's before the price gets too insane. My question is: say I have 5 now, which I believe should give me 9 TB of capacity with 6 TB being used on parity. Can I later install a 6th drive and raise capacity to 12 TB? How does the growing/shrinking work?
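The capacity figures in the question check out; a quick sketch of the raidz2 arithmetic (the function name is just for illustration):

```python
def raidz2_capacity(n_drives, drive_tb):
    # raidz2 reserves two drives' worth of space for parity,
    # so usable capacity is (n - 2) * drive size
    return (n_drives - 2) * drive_tb

print(raidz2_capacity(5, 3))  # 9  -> 9 TB usable, 6 TB parity
print(raidz2_capacity(6, 3))  # 12 -> 12 TB usable with a 6th drive
```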

Thanks!
 
You cannot add drives to an existing vdev. So if you created a 5-drive raidz2, that is what that vdev stays. You can always grow the pool by adding another vdev, say another 5-drive raidz2, but that sounds like overkill for you. Otherwise you would need to copy the data off, destroy the pool, recreate it with the 6th drive, and copy the data back onto it.
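In command form, this looks roughly like the sketch below (pool and device names are made up, and these need root plus real disks, so treat it as illustration only):

```shell
# Create the initial 5-disk raidz2 pool:
zpool create tank raidz2 da0 da1 da2 da3 da4

# Growing the pool later means adding a whole new vdev,
# e.g. another 5-disk raidz2:
zpool add tank raidz2 da5 da6 da7 da8 da9

# There is no command that adds a single 6th disk into the
# existing raidz2 vdev; that requires destroy-and-recreate.
```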
 

Other option:
use a RAID-10 config + hot spare instead = 6 TB capacity from 5 disks.
You can increase capacity by adding more mirrors, and you get much better performance out of them.
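A sketch of that layout, again with hypothetical device names (not something to run as-is):

```shell
# Two mirror pairs plus a hot spare = 6 TB usable from five 3 TB disks:
zpool create tank mirror da0 da1 mirror da2 da3 spare da4

# Unlike raidz2, a mirrored pool grows two disks at a time
# by simply adding another mirror vdev:
zpool add tank mirror da5 da6
```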
 
Thanks for the reply, Gea. I considered this, but I am accessing the NAS over a 1 Gb/s network link, so presumably I would be bottlenecked on that either way, right? And at least in the Z2 case I can lose any two drives safely and still function, whereas with RAID-10, if I lose two drives from the same mirror before the spare is brought up, then I'm screwed, right?
 

About performance:
correct if you have a large sequential, unfragmented datastream.
If you read, for example, lots of small data blocks, you need I/O performance.
On a RAID-10 you have 4x the read performance compared to a pool built from one raidz vdev.
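The 4x claim follows from random reads being served by any disk in a mirrored pool, while a raidz vdev delivers roughly one disk's worth of random-read IOPS. Back-of-envelope, with an assumed (not measured) per-disk figure:

```python
disk_iops = 80  # rough figure for a 5400 rpm drive; an assumption

# One raidz vdev behaves like ~one disk for random reads:
raidz_read_iops = disk_iops

# Two mirror pairs = 4 disks, any of which can serve a read:
raid10_read_iops = 4 * disk_iops

print(raid10_read_iops // raidz_read_iops)  # 4
```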

About reliability:
on a raidz2, any two disks can fail. But due to the much lower resilver time on mirrors, and the fact that a second failure will likely hit another vdev, the real reliability is nearly the same.

High-performance pools are built from mirrored vdevs.
If your data is really important, or you need superior performance, a 3-way mirror is an option.

(I built my main pools from 4 x 3-way mirrors of 'cheap' MLC SSDs, used as an ESXi datastore.)
 
_gea, your 4 x 3-way mirror of SSDs - is that using ZFS, or a hardware RAID controller?

It is ZFS software RAID on LSI 2008-based SuperMicro onboard SAS2 controllers.
(I avoid hardware RAID controllers on ZFS-based systems.)
 
Ok - I am just wondering how you got ESXi to see the datastore when it is on a ZFS filesystem. I didn't know that was possible! Is this using NFS, or can VMWare somehow see this directly? Thanks!
 
I believe Gea does the all-in-one approach, where one or more filesystems on ZFS are shared back to ESXi via NFS.
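If that's the setup, the share-back side would look something like this sketch (dataset name is hypothetical):

```shell
# Create a dataset for VM storage and export it over NFS:
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore

# ESXi then mounts it as an NFS datastore, pointing at the
# storage VM's IP and the dataset's export path.
```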
 