160GB HDDs in ZFS pool, slowly replaced by 4TB drives?

damarious25
Limp Gawd · Joined: Dec 27, 2010 · Messages: 227
I've started a collection of 160GB HDDs from random manufacturers.
The goal is to achieve 20 TB of space in a ZFS pool (RAID-Z or Z2) for a media server, but I'm poor and can't afford to buy all the HDDs at once.
I know I can't "add" HDDs to a pool, but as far as I know I can replace HDDs with higher-capacity drives? So is it a problem to build a ZFS media server with 160GB disks, have it operational, and slowly replace all drives with higher-capacity 3 (or 4) TB disks? Are there risks?

I plan on having the OS on 2 SSDs in a mirrored vdev. I also read about people adding an SSD in front of ZFS pools to act as a cache? Is this useful for a media server? I mean, if you always access different media, then trying to store "frequently accessed files" seems kind of pointless to me. And if it acts like a buffer, I also fail to see the point for me, as I'm not concerned about write speeds overall.

This will be a dedicated media server box.
 
A read cache (l2arc) is basically useless for a media server, correct. For your case, I'd be leery of replacing drives that small with 4TB drives, as things will get very unbalanced. How many 160GB drives do you have right now?
 
That's fine. There are two risks:

-If you only have 2-disk mirrors or RAID-Z vdevs, you will lose almost all protection on the array while it rebuilds. If you can use RAID-Z2, then you'll still have room for one failure during the rebuild.

-If you had silent corruption on one of your disks and didn't know it, your data could be further corrupted during the rebuild. Doing a zpool scrub first to verify there's no corruption will help mitigate this risk.
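
For reference, a minimal sketch of that pre-rebuild check (the pool name 'tank' is just a placeholder):

    zpool scrub tank       # read every block and verify it against its checksum
    zpool status -v tank   # watch progress; any files with errors are listed when done

A scrub that finishes with zero errors means the surviving disks can all be read cleanly, which is exactly what the resilver will depend on.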

As dans said, no need for l2arc for media.
 
You can do that, but it won't work like you think - the pool won't get any bigger until all drives in the vdev are upgraded to the larger size. So say you have a 5-device vdev: it will resilver fine each time you swap a drive, but it won't expand to the full size until you replace all five drives.
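
If it helps, the swap itself is one command per disk. A sketch with placeholder names ('tank' and the da* device names are assumptions):

    zpool set autoexpand=on tank   # let the pool grow once every disk in the vdev is larger
    zpool replace tank da0 da5     # resilver onto the new, bigger drive
    zpool status tank              # wait for the resilver to finish before swapping the next disk

With autoexpand left off (the default), you'd run 'zpool online -e' against each disk at the end to claim the extra space instead.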
 
You can do that, but it won't work like you think - the pool won't get any bigger until all drives in the vdev are upgraded to the larger size. So say you have a 5-device vdev: it will resilver fine each time you swap a drive, but it won't expand to the full size until you replace all five drives.

^ True. I suppose it wasn't clear whether the OP understood that, and I just assumed he did.

To make upgrading easiest, you could do 2-drive mirror vdevs, which means you'd only need to upgrade two drives at a time, but that gives you 50% space efficiency and leaves a vdev open to failure during a rebuild (though you MIGHT be able to take the other drive you'd pulled and rebuild from that - honestly, I'm not sure if that would work, or if by that point you'd have a bunch of corruption that might be difficult to fix). Larger groups with RAID-Z or Z2 could give you more space efficiency (and, with Z2, better protection, especially during a rebuild) but will need more drives replaced at a time to upgrade the capacity of the pool.
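
A sketch of that mirror layout (pool and device names are placeholders):

    zpool create tank mirror da0 da1 mirror da2 da3   # two 2-drive mirror vdevs
    zpool replace tank da0 da4                        # swap one half of a mirror for a bigger drive
    zpool replace tank da1 da5                        # then the other half, after the first resilver finishes

Each mirror vdev grows independently once both of its drives are larger, which is why only two drives at a time need replacing.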
 
That's fine. There are two risks:

-If you only have 2-disk mirrors or RAID-Z vdevs, you will lose almost all protection on the array while it rebuilds. If you can use RAID-Z2, then you'll still have room for one failure during the rebuild.

-If you had silent corruption on one of your disks and didn't know it, your data could be further corrupted during the rebuild. Doing a zpool scrub first to verify there's no corruption will help mitigate this risk.

-RAID-Z2 for the OS! Didn't think of that. Thanks!

-So if swapping the 160GB disks for 3 (or 4) TB disks is possible, are scheduled zpool scrubs recommended? Or is a scrub something you do around hardware changes, like once when the initial 160GB disk pool is built and then again each time a disk is swapped?
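
For what it's worth, periodic scrubs are the usual practice regardless of hardware changes. A minimal crontab sketch, with 'tank' as a placeholder pool name:

    # root's crontab: scrub on the 1st of each month at 03:00
    0 3 1 * * /sbin/zpool scrub tank

The path to the zpool binary varies by OS, so check yours.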

You can do that, but it won't work like you think - the pool won't get any bigger until all drives in the vdev are upgraded to the larger size. So say you have a 5 device vdev, it will resilver fine each time you swap a drive, but it won't expand to the full size until you replace all five drives.
Oooo. I didn't know that. Financially, though, this might be my only option, and it's not a critical build, so I think I could live with that. But it will affect my decision between a RAID-Z and a RAID-Z2 storage pool.

Also, will there be imbalance issues, as mentioned by danswartz? I can certainly see a system being picky over such large changes.
 
-RAID-Z2 for the OS! Didn't think of that. Thanks!

-So if swapping the 160GB disks for 3 (or 4) TB disks is possible, are scheduled zpool scrubs recommended? Or is a scrub something you do around hardware changes, like once when the initial 160GB disk pool is built and then again each time a disk is swapped?

What OS, exactly? Depending on which, you may have no or only partial support for ZFS as root. A fair number of people run their root filesystem on a hardware mirror and then use RAID-Z/Z2 for their data pool(s).
 
I would spare myself that process. You will most likely have a pool with 512-byte sectors (ashift=9), which will be sub-optimal for the new disks. And you only get the benefit of the larger disks once you replace the last disk.
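
For the new pool, a sketch of forcing 4K alignment from the start (this assumes a ZFS version that accepts the ashift property at creation, as current OpenZFS does; pool and device names are placeholders):

    zpool create -o ashift=12 media raidz2 da0 da1 da2 da3 da4 da5

On older FreeBSD or Illumos releases the same effect took workarounds (the gnop trick, or FreeBSD's vfs.zfs.min_auto_ashift sysctl), so check what your platform supports.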

Just create a new pool and move all datasets over with 'zfs send/receive'. This has the added benefit of defragmenting the filesystem to some extent. You can even reduce the downtime by transferring a snapshot first and then unmounting and synchronizing the pools by transferring only the changes, but even a few hours of downtime should be okay for a home media server.
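
A sketch of that migration, assuming an old pool named 'tank' and a new pool named 'media':

    # initial bulk copy while the old pool stays in service
    zfs snapshot -r tank@move1
    zfs send -R tank@move1 | zfs receive -F media

    # later: stop writers, snapshot again, and send only the delta
    zfs snapshot -r tank@move2
    zfs send -R -i tank@move1 tank@move2 | zfs receive -F media

The -R flag replicates all child datasets, snapshots, and properties; the incremental -i send carries only what changed between the two snapshots, which is what keeps the final downtime short.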
 
What OS, exactly? Depending on which, you may have no or only partial support for ZFS as root. A fair number of people run their root filesystem on a hardware mirror and then use RAID-Z/Z2 for their data pool(s).

I was thinking OpenIndiana or FreeBSD. I've used command-line FreeBSD in the past for various things, but nothing that I would have placed this much importance on.

I would spare myself that process. You will most likely have a pool with 512-byte sectors (ashift=9), which will be sub-optimal for the new disks. And you only get the benefit of the larger disks once you replace the last disk.

Just create a new pool and move all datasets over with 'zfs send/receive'. This has the added benefit of defragmenting the filesystem to some extent. You can even reduce the downtime by transferring a snapshot first and then unmounting and synchronizing the pools by transferring only the changes, but even a few hours of downtime should be okay for a home media server.

Kinda not sure I follow? Wouldn't 'zfs send/receive' already require a built pool? I've never used ZFS, so sorry if I'm asking dumb questions. And you are right about downtime. It's not a big issue.

Basically, I have TBs of data in JBOD spread across old machines. I need this changed. I desperately want to bring everything together for reliability (RAID-Z), convenience, increased storage, and energy savings. The PCs I have now are old, and keeping them running is adding about $30-40 a month. Seriously. I have access to a copy of Windows Server 2008 as well, but I don't need to tell anyone here about cost savings when looking at a hardware RAID card vs. a SATA controller.
 
Try PC-BSD. It has all the latest open-source ZFS stuff, including feature flags. Big bonus: it comes with OpenSolaris-style boot environments, so you can back out changes that went wrong. It also supports mirrored OS disks without any hacks...
 
Kinda not sure I follow? Wouldn't 'zfs send/receive' already require a built pool? I've never used ZFS, so sorry if I'm asking dumb questions. And you are right about downtime. It's not a big issue.
Yes, you need a new pool. What I basically meant is that there is no elegant way to slowly grow an existing RAID-Z/2/3 pool by adding a drive now and then. Either you replace all drives at once or you build a new pool. You can also add a new vdev, but to keep the redundancy it also has to be a RAID-Z/2/3, so the number of drives you need for parity increases with each vdev. I would also rather build a new, separate pool for a media server than add a new vdev to the same pool. That way you can always separate the pools, which you can't with multiple vdevs.
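
Both options, sketched with placeholder names (the new drives shown as da6-da10):

    # option 1: grow the existing pool by adding a second RAID-Z2 vdev (it can't be removed later)
    zpool add tank raidz2 da6 da7 da8 da9 da10

    # option 2: build a separate pool and migrate with zfs send/receive
    zpool create media raidz2 da6 da7 da8 da9 da10

Option 1 spends another two drives on parity and permanently ties the new disks to the old pool, which is the trade-off described above.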

If you absolutely want a solution that can be extended by single drives, you can look at SnapRAID. In my opinion, it's a better solution for exclusively media files.
 
Try PC-BSD. It has all the latest open-source ZFS stuff, including feature flags. Big bonus: it comes with OpenSolaris-style boot environments, so you can back out changes that went wrong. It also supports mirrored OS disks without any hacks...

Thanks for the suggestion. I'll look into it. If I do go with a BSD, I'd lean towards FreeBSD because of familiarity and the community support.
 
pc-bsd IS freebsd, just a few bells and whistles on top. As far as I can tell, all the admin stuff works the same way as stock freebsd.
 
pc-bsd IS freebsd, just a few bells and whistles on top. As far as I can tell, all the admin stuff works the same way as stock freebsd.

Great to know. Thanks. Maybe things have changed, but in the past, when I needed a Unix-based OS, I tried a number of BSD variants (and sub-variants). I found the most stable was FreeBSD.
 