Expanding ZFS RAIDZ2

BonzTM

Hello,
I currently have a 10 drive RAIDZ2 of 2TB drives. It is a total of 14.2TB useable, and has 1TB free. I know I shouldn't have went over the 20% free rule. Assuming I add another 10 drive RAIDZ2 to the pool, will I have performance issues considering it won't re-stripe across the whole thing? Should I create a 2nd pool with the new drives and just split my data? What would you suggest?

Thanks.
 
Here is what you can do. Add the 2nd vdev. Then copy large files/dirs/whatever to temp names, delete the originals, and rename. Do that enough and you will be roughly rebalanced.
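A rough sketch of the trick, assuming a pool named "tank" and made-up directory names:

  # copy to a temp name; the newly written blocks stripe across all vdevs
  rsync -a /tank/media/movies/ /tank/media/movies.tmp/
  # verify the copy, then drop the original and rename
  rm -rf /tank/media/movies
  mv /tank/media/movies.tmp /tank/media/movies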
 
Something easy enough to do manually. It would be awesome if Gea added an "automatic re-balance" feature to Napp-It, though.
 
You are right - if you add a 2nd 10-drive RaidZ2 vdev to the pool while the first one is almost full, it won't re-stripe between the vdevs and (almost) all of your new writes will land on the 2nd vdev.

Will it have the same performance as a balanced pool of two striped 10-drive RaidZ2 vdevs? No.
Will it have any less performance than you had with the original RaidZ2 pool? Also no.

So the question is this - will it have enough performance to support your application? Well - if the original RaidZ2 pool gave you enough performance then the 2xRaidZ2 (unbalanced) pool will too. And assuming your application is not "write only" - that is, assuming you delete or move things occasionally - over time it will pick up the benefit of the stripe without having to do much of any work at all.
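Growing the pool is a one-liner, for what it's worth; a rough sketch assuming the pool is named "tank" (device names are placeholders):

  # add a second 10-disk RAIDZ2 vdev to the pool "tank"
  # (run with -n first for a dry run of the resulting layout)
  zpool add tank raidz2 c0t10d0 c0t11d0 c0t12d0 c0t13d0 c0t14d0 \
      c0t15d0 c0t16d0 c0t17d0 c0t18d0 c0t19d0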

Also, not sure this is something to expect from napp-it. For the most part, napp-it provides a web interface to common single-line ZFS commands or sequences of commands without much decision making between them. What you ask for here would be a really long-running job with lots of decision making and monitoring. Not really in the spirit of what napp-it seems to be.
 
Pretty insightful.

Yes Pig, it suits my application just fine, as long as I won't see reduced performance due to the 1st vdev being full.
I suppose if I pulled off ~2TB of data at a time to a network drive, then copied it back, I could get it to rebalance.

Would it be of benefit to add another 10-drive RAIDZ2 vdev to this current pool, or should I go ahead with 2 pools? If I replace the 2TB drives with 3TB or 4TB drives in the future, would I need to replace all 20 at once, or could I do 10 at a time and reap the benefit of the extra space?

Thanks guys =D
 
I wouldn't split pools unless you have different requirements for them, etc. Just add to the existing pool and either copy/rebalance or let time do it for you.
 
You can replace a single disk with a 4TB disk, one at a time. Wait for the resilver to complete, and then replace the next disk. When all the disks have been replaced, the new storage will become available.
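In command form, that's roughly the following, assuming a pool named "tank" (disk names are placeholders):

  zpool set autoexpand=on tank       # let the pool grow automatically
  zpool replace tank c0t0d0 c1t0d0   # swap one old disk for a new 4TB disk
  zpool status tank                  # wait for the resilver to complete
  # repeat for the remaining disks; the extra space appears once the
  # last disk in the vdev has been replaced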
 
You do not have to replace all the drives in the entire pool to get the extra space; you will, however, need to replace all the drives in a vdev before it will expand.
 
You could probably replace one disk per vdev, right? So if he had two 10-disk vdevs, he could do two at a time (one disk per vdev).
 
Can you explain this a little more?
 
Basically, in a RAID-Z, RAID-Z2, etc. configuration, the drive with the smallest capacity dictates the size of the RAID. For example, if you have a 3-disk RAID-Z configuration with three 500GB disks, you will have 1TB of available storage. If you replace one 500GB drive with a 1TB drive (resilvering), you will still have 1TB of available storage. It's not until you replace all 3 drives with 1TB drives that ZFS automatically gives you 2TB of storage space. This means that until all your disks are replaced, you can only use the maximum of the old drive size, so in this case the 1TB drive acts as a 500GB drive. Hope that clears it up for you.
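You can watch this per vdev, assuming a pool named "tank":

  # shows allocated and free capacity for each vdev; a vdev stays at the
  # size of its smallest member until every disk in it has been upgraded
  zpool iostat -v tank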
 
I understand that completely, and that's how I figured it worked. But Spazoid said in his previous post that I don't need to replace every drive to get the added space, which confused me a bit.

Also, another question I had: if I have a 10-disk vdev of 2TB drives and a 10-disk vdev of 1TB drives, both part of the same pool, would that work? Or do both vdevs have to be of equal size? Could I then replace all the 1TB drives with 3TB or 4TB ones and keep the vdev of 2TB drives?
 
As someone stated, each vdev is independent. As you probably know, pools consist of vdevs. The vdevs do not have to be the same size, or even the same type (though matching them is recommended).

Consider the following:

Pool consists of 2 vdevs:
vdev1: 10 drives of 1 TB, RAIDZ = 9 TB
vdev2: 10 drives of 2 TB, RAIDZ = 18 TB
Poolsize = 27 TB

To upgrade this, you could simply swap each drive in vdev1 for a new and larger one, e.g. 4TB drives. This would give you 10 drives of 4TB, or 36TB usable, as one drive's worth of space would still be used for parity. Total poolsize = 36 TB (vdev1) + 18 TB (vdev2) = 54 TB.
Remember to swap one drive at a time; a sketch of the procedure follows.
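Assuming the pool is named "tank" and autoexpand is enabled (disk names are placeholders):

  # repeat for each of the 10 disks in vdev1, one at a time
  zpool replace tank c0t0d0 c2t0d0   # old disk, new 4TB disk
  zpool status tank                  # wait until the resilver finishes
  # after the 10th swap the vdev grows automatically if autoexpand=on;
  # otherwise expand it manually with: zpool online -e tank <disk>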
 
Thanks guys,
Got everything up and running. Moved from my CM590 to a Norco 4220. Did an export on the pool, installed new hardware, then did an import once everything was up and running.
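For anyone doing the same move, assuming a pool named "tank", it's just:

  zpool export tank   # on the old box, before pulling the drives
  zpool import        # on the new box: lists pools available for import
  zpool import tank   # import it (add -f if it was not cleanly exported)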
I appreciate all the advice.

System: Norco 4220
CPU: Athlon X4 840
RAM: 8GB DDR2-800
Mobo: Gigabyte 785GM mATX
PSU: Corsair TX650
HBA: SuperMicro AOC-USAS2-L8i (IT Firmware)
SAS Expander: Chenbro CK12803
HDDs: 10x Hitachi 5k3000 2TB; 8x WD Black 1TB & 2x WD Green 1TB (2x10 drive RAIDZ2)
 