Replacing a 4k drive in FreeBSD raidz

Cryptic1911

Hey guys, I set up a raidz vdev with 5x 2TB drives on FreeBSD 8.2 a while back, and all has been working well, except that I noticed a bunch of bad sectors in smartctl on one drive. I'm RMAing the "bad" drive before it actually dies on me, but in the meantime I bought a new drive from Newegg to replace it so I don't have to wait two weeks in a degraded state.

Anyway, what's the proper procedure for this? I'm wondering if I have to do anything special with the "new" drive, since I did the whole gnop thing to get them all aligned to 4k when I initially set up the vdev. Can I just plug the new drive in and do a replace? I'm not positive whether it'll figure it out on its own since the vdev was set up as 4k, or if I have to gnop the new drive to 4k first and then do the replace.
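For reference, the original creation went roughly like this (recalling from memory, and the device names here are just examples):

gnop create -S 4096 da9 da10 da11 da12 da13    # 4k-sector gnop providers on each disk
zpool create zfsarray raidz da9.nop da10.nop da11.nop da12.nop da13.nop    # build the raidz on the .nop devices so ZFS uses ashift=12
zpool export zfsarray    # export the pool
gnop destroy da9.nop da10.nop da11.nop da12.nop da13.nop    # drop the gnop layer
zpool import zfsarray    # re-import on the raw disks; ashift stays at 12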

Thanks
 
Subscribed. I am curious to know.

On an unrelated note, how has your performance been with ZFS and 4k drives? I've heard that 4k drives are slower than the older 512-byte-sector drives. Thoughts?
 
It's been decent. I have two zpools, one with 10x 500GB drives (2x 5-disk vdevs) and the new one with 5x 2TB 4k disks, and they're about the same speed-wise:

Just did a quick dd test and got 323MB/s write and 618MB/s read from the 2TB 4k pool, and 331MB/s write and 700MB/s read from the 10x 500GB pool. Keep in mind that I have prefetching enabled, so the read numbers would be lower with it disabled (which is the default if you have 4GB of RAM or less).
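If anyone wants to compare numbers, the test was basically a big sequential dd against the pool. Mount point and file size here are just examples; the file needs to be a lot larger than RAM for the numbers to mean much:

dd if=/dev/zero of=/zfsarray/ddtest bs=1m count=16384    # ~16GB sequential write
dd if=/zfsarray/ddtest of=/dev/null bs=1m                # sequential read back
rm /zfsarray/ddtest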
 
I don't want to keep bumping this, but has anyone got an answer for this one? I want to make sure I do it right and don't end up screwing up my pool.
 
IIRC, when you created the zpool with the gnop devices to make ZFS use 4k sectors, the ashift from that initial creation holds for the life of the vdev. In other words, as long as it was created that way, you don't need to do it again, even for new drives added to that vdev.
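If you want to double-check before touching anything, zdb will show the ashift the vdev was created with (12 means 4k alignment, 9 means 512-byte); something along the lines of:

zdb -C zfsarray | grep ashift    # should report ashift: 12 on a 4k-aligned pool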
 
ZFS is smart enough to recognize the existing 4k alignment, and the new drive will be brought in with the proper ashift. Likewise, when setting up a pool with 4k drives you can gnop just the first drive and build the pool; the proper alignment will apply to all the other disks in that vdev.
 
You could just pull the old drive and stick the new one in the vacated slot. However, since your old drive hasn't actually failed yet, I would plug in the new drive (without removing the old one) and then do a 'zpool replace <old> <new>'. That way, if another drive fails during the resilver, there is still redundant data available to recover the pool. That assumes ZFS is smart enough to keep using the old drive until the new one is fully resilvered.
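In command form, with the old drive still attached and the new one showing up as a fresh device (device name here is just an example), that would be something like:

zpool replace zfsarray da13 da14    # old drive da13 stays online while da14 resilvers
zpool status zfsarray               # watch the resilver progress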
 
OK, thanks guys.. kinda what I was thinking, but I wasn't too sure. I'll try just doing a replace and see what happens.
 
OK, it worked out well (so far; it's resilvering). I did a 'zpool offline zfsarray da13', then popped the drive out, put the new drive in, ran 'camcontrol rescan all' and checked dmesg to make sure it found /dev/da13, ran 'diskinfo -v da13' to verify the serial number was correct, then 'zpool online zfsarray da13', then 'zpool replace zfsarray da13'. It's been resilvering for about 20 minutes now; should be done in about 5 more.
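To spell the sequence out for anyone who finds this later:

zpool offline zfsarray da13    # take the failing disk out of service
# physically swap the drive, then:
camcontrol rescan all          # have the controller pick up the new disk
dmesg | tail                   # confirm it attached as /dev/da13 again
diskinfo -v da13               # check the ident/serial to be sure it's the new drive
zpool online zfsarray da13
zpool replace zfsarray da13    # single-argument replace: resilver onto the new disk in the same slot
zpool status zfsarray          # check resilver progress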
 