Migrating my ZFS pool - come give me some pointers

Concentric

Yo,

I'm a bit of a Unix n00b, but I've been enjoying getting to know ZFS using OpenIndiana on a home server that I've been gradually upgrading - it's the Opteron one in my sig.

It's working great at the moment with a little 3x 1TB RAID-Z pool using consumer-grade Samsung SATA drives, sharing a few filesystems using SMB.

I have four 1TB Seagate Constellation ES SAS drives that I'd like to use to replace the pool in this machine - they have to go in this rig because it's the only one I have that supports SAS. The SATA drives can then be repurposed (probably in a second server later, to replicate this one).


The situation is that I need to transfer the existing zpool and all its filesystems on the SATA drives to a new pool on the SAS drives, with as little effort and reconfiguration as possible. I'd rather not have to start from scratch and manually copy everything.

Since it's just a home server, up-time is not critical. But it would be nice, for example, if I could keep all the current SMB shares and the permissions config that's on my existing pool, because it took me a lot of effort to get it working as it is now and I don't fancy doing all that again.

I've been doing a bit of research and playing around a bit with ZFS Send and Receive but ran into a few errors and would rather wait until I understand what I'm doing than go off and break my existing pool.
(I have all the data on the existing pool backed up, but I'd still rather not destroy it if I can help it!)

Has anyone gone through this procedure before and can guide me through it?
Am I on the right lines with Send and Receive?
Will that preserve all the pool settings like shares and permissions?
Do I take a snapshot of the whole pool or do I have to Send/Recv each filesystem?
Is there anything to set up on the new drives before Send/Recving, apart from just creating a new pool? For example, I came across some permission problems when trying to Receive on the new pool.
Any and all advice/info welcome!

Oh and because everyone loves pics, here's one :D:
IMG_0322.jpg
 
your options:
you can do a 1:1 disk replacement (SATA to SAS), resilvering after each replace, with no need for a separate data transfer

or
you can create a new pool and transfer all the filesystems with ZFS send.
Because you are using OI and the Solaris CIFS server, all shares, ACLs, Windows SIDs and permissions live on the pool itself and are transferred along with it. So it should be easy.

You may transfer the whole pool with a recursive ZFS send, or one filesystem at a time - a rough sketch of both options is below.
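Assuming made-up pool and disk names (check yours with format and zpool status), it looks roughly like this:
Code:
# option 1: swap disks one at a time
zpool replace tank c2t0d0 c3t0d0
zpool status tank    # wait for the resilver to finish before the next replace

# option 2: build the new pool, then snapshot and send everything recursively
zpool create newtank raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -dF newtank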
 
Why don't you just pull from backup?
As I mentioned, I'd quite like to preserve all the existing pool and filesystem settings rather than having to set everything up from scratch. Plus, I'd like to know how to migrate pools regardless of the scenario, because I want to become more proficient and knowledgeable about these things.


Thanks Gea.
I don't think I could do a 1-by-1 replace and resilver in this case, because I want to go from a 3-drive pool to a 4-drive pool and replacing disks 1:1 would still leave me with three. Correct me if I'm wrong...?

Good to know the settings would be preserved with a Send/Receive. But I'm intrigued - you say "Because you are using OI and the Solaris CIFS server..." - so is it not the same on other OSes? Say, if I were using FreeBSD, would the settings not be part of the pool?

Sounds like Send/Receive is the way to go; now to work out the commands.
I'm at the point of sending but get this error:
Code:
$ zfs send -R pool@snapshot | zfs receive -dF newpool
cannot mount 'newpool': Insufficient privileges
 
Gotcha. But you shouldn't lose your settings by pulling from backup. Either way, send/receive will maintain everything. I was just wondering if you had a backup present.
 
Maybe I'm missing something. This way, you copy a compressed stream to the new pool, then uncompress it as you receive it. How is that an improvement over a straight disk => disk replication? At the end of the day, disk I/O will be the limiting factor, no?
 
Do you get to say, "gzcat"? :D


srsly - bundling and streaming recursive snaps as a single compressed archive may have a lot of benefits in particular situations, like moving an uncompressed rpool by archiving/recovering a BE with snaps via an NFS backing share.

Plus, it's good to know ZFS allows us to snap/pipe/compress/send/receive/decompress with that little bit of code (and a little time). :cool:
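Something along these lines - pool, snapshot and path names are only examples:
Code:
# bundle the recursive stream into one compressed archive on an NFS share...
zfs snapshot -r tank@move
zfs send -R tank@move | gzip > /net/backup/tank-move.zfs.gz

# ...then later pour it straight into the new pool
gzcat /net/backup/tank-move.zfs.gz | zfs receive -dF newtank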
 
For one thing, he isn't migrating his rpool, but the data pool. Also, he was asking how to migrate data from one set of local disks to another, so introducing things like BEs and rpools isn't germane. You still haven't explained how doing the same set of disk I/O twice is an advantage - maybe I am being dense. Your way writes a compressed stream to a file on the old pool; you then have to read that data back again to send it to the new pool.
 
:p Sorry paret, but I'm tempted to agree with dan here - I don't see the purpose of compressing the data and then immediately decompressing it, especially when it's two sets of local disks? But yeah, "gzcat" does have a certain ring to it :D

Your suggested commands seem to be doing the same thing I mentioned in my last post, but as I said, I'm getting the "Insufficient privileges" error. I've "zfs allow"-ed my user on the machine to do everything (mount, send, receive, etc.) - something like what's shown below. Are there some other permissions I need to set? :confused:
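For reference, the delegation I set up was along these lines (user name changed):
Code:
zfs allow -u myuser create,mount,snapshot,send,receive newpool
zfs allow newpool    # lists what's been delegated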
 
Well, I think I remember noticing the "$" prompt instead of "#" in that bit. I usually su - to root before I commence almost anything like this... Another benefit of sending an archive is that permissions are pretty straightforward, because you're only sending one file (and you can send it straight to a /dev/c2t2d2) :)
 
FFS :D It's always a simple mistake. I thought I was giving myself enough permission by sudo-ing my way around here and there, but obviously not. Just tried doing it as root and the transfer is underway!
250 gigs into about 1TB to transfer, and the iostats are looking alright to me! - 95MB/s read on the old pool and 130MB/s write on the new.
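(I'm keeping an eye on it with something like this - pool names are examples:)
Code:
zpool iostat -v tank newtank 5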
 
Transfer of approx 710GB (not 1TB :p) is complete, so it took, what, just over an hour...? Isn't that nearly 200MB/s? The iostat averages (which measure since boot, which wasn't more than an hour before) came to 130MB/s read on the old pool and 142MB/s write on the new. Not bad.

Just had to 'zfs set sharesmb=on' the filesystems on the new pool and it works straight away! Very satisfying.
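For anyone following along, that was just this for each filesystem (names are examples):
Code:
zfs set sharesmb=on newtank/share1
zfs get sharesmb newtank/share1    # confirm it's shared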
 