ZFS - Mixing 4K/512 Drives...

jonnyjl

I just want some confirmation that this is not a big deal. I understand I'll see a performance penalty since the pool doesn't have the correct ashift, but other than that, are there any big issues?

I'm just hesitant since I'm going to spend close to $2K replacing 11 drives. Ideally I'd rather build a new pool and migrate; I've done it before, but right now that isn't feasible.

FYI, I'll first be mixing 4K drives within a vdev (raidz2), but eventually that vdev will be all 4K drives, and the pool will consist of just two vdevs: one with 512-byte drives and this 4K one.
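For anyone else checking the same thing: before buying anything you can confirm what the existing pool was created with. A minimal sketch, assuming a pool named "tank" (substitute your own pool name):

Code:
# Print the cached pool config and pull out each vdev's ashift.
# ashift=9 means 512-byte alignment, ashift=12 means 4K alignment.
zdb -C tank | grep ashift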
 
Greetings

As I understand it:

(a) You can have a pool consisting of, say, one raidz2 vdev of 512-native drives alongside a raidz2 vdev of 4K/512e (emulated 512) drives with no problems; because 512e drives present 512-byte logical sectors, they will also work as replacements for 512-native drives if required. The only downside I can see is (I don't know, but I presume) that ZFS will want to align on 4KB boundaries, much like Windows 7 does (Windows XP doesn't, as it starts the partition at sector 63, which isn't divisible by 4KB). Another complication: depending on the ZFS implementation, some 4K/512e drives are apparently reported to ZFS as 4K-native and won't be accepted into ashift=9 pools, because the 4KB physical sector size gets taken as the logical sector size as well (the sketch after this list shows how to check what a drive actually reports).

(b) Native 4KB drives will not work as replacements in ashift=9 pools. These are few and far between; the only ones I know of are expensive dual-ported 15K RPM SAS drives, which nobody is going to accidentally use in a ZFS pool unless they know what they're doing, and if you're buying those drives I'd expect you do. Most drives will be 512e because Windows 7 doesn't understand 4K-native whereas Windows 8 does, and since Windows 7 doesn't exit extended support until 2020, I don't expect this situation to change before then, and probably not for quite a while after.

(c) I've read that ZFS was originally designed for any sector size, but the metadata is currently optimized for 512-byte sectors, so I infer there is some wasted space in 4KB metadata sectors; how much that is, and whether it's significant, I have no idea.

(d) If you do have 512e drives that aren't aligned on 4KB boundaries, it's easy to spot. There was a thread here about 6-12 months ago where a pool benchmarked at something like 500 MB/s reads but only about 100 MB/s writes; after destroying and recreating the pool, writes went back up close to the 500 MB/s mark. I can't remember the exact circumstances, so you'll have to look up the thread. I guess if this happens to you after adding the vdev to your existing pool, you'll be stuck with it unless you destroy the entire pool (a crude way to run this check is sketched after this list as well).

(e) You probably won't notice the difference anyway: with the default recordsize of 128KB and 8 data drives in your raidz2 vdev, each record is written in 16KB chunks per drive regardless of whether the drives are 512n, 4K/512e, or 4Kn.
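Since a lot of the above hinges on what the drive actually reports, here's a minimal sketch of how to check on Linux (smartctl comes from smartmontools; /dev/sda is just an example device). A 512e drive will show 512 bytes logical / 4096 bytes physical, while a 4Kn drive shows 4096 for both:

Code:
# What the drive itself reports
smartctl -i /dev/sda | grep -i 'sector size'
# Or all block devices at once
lsblk -o NAME,PHY-SEC,LOG-SEC

On FreeBSD, diskinfo -v /dev/ada0 shows the same information (sectorsize and stripesize).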
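And for the crude read/write check mentioned in (d), something like this is enough to spot the lopsided numbers (paths are examples; note that /dev/zero is highly compressible, so disable compression on the test dataset or the write figure will be meaningless):

Code:
# Crude sequential write test into the pool
dd if=/dev/zero of=/tank/ddtest bs=1M count=8192
# Crude sequential read test back out (ARC caching can inflate this)
dd if=/tank/ddtest of=/dev/null bs=1M
rm /tank/ddtest

If writes come out several times slower than reads on the same vdev, misalignment is a prime suspect.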

I think this info is correct, but see if you can get some other people to confirm it.

Hope this helps

Cheers
 
If your pool was created with ashift=9, you won't be able to add any drives that are detected as 4K. I recently had a client whose server was built with 512-byte drives. After two years of working fine, a drive inevitably died. The version of ZFS at the time would not let me add a 4K replacement drive to the pool because it was created with ashift=9. I ended up having to back everything up and rebuild the pool with ashift=12. If there is a better way to deal with this I would love to know, since there are other servers I've deployed that will have the same issue.
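For what it's worth, when I rebuilt I stopped trusting autodetection and forced the alignment at creation time. On ZFS on Linux that's a create-time property (pool and device names below are just examples):

Code:
# Create the replacement pool 4K-aligned so future 512e/4Kn drives are accepted
zpool create -o ashift=12 tank raidz2 sda sdb sdc sdd sde sdf
# Verify
zdb -C tank | grep ashift

On FreeBSD of that vintage the equivalent trick was creating the pool on gnop(8) 4K-sector overlay devices, or on newer versions setting the vfs.zfs.min_auto_ashift sysctl to 12 before running zpool create.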
 
Hmm.

Sigh, can anyone recommend a cheap HBA with external ports?
 
You could just get an M1015 and a bracket to convert it to external ports. That's probably the cheapest way to go.
 
I have the Dell 6Gbps SAS HBA; it is also cheap and has two external ports. Remember that you cannot flash it with the normal LSI 2008 firmware. There is a thread I created on this in the www.servethehome.com forum.
 
Thanks guys. I do remember I have a SAS internal-to-external bracket left over from my hardware (desktop) days.

I'll probably end up going that route: building a new pool in an old external SAS box (... have to find the drive tray keys!), migrating, then moving the disks back into the internal chassis.
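For the actual migration I'll probably just do a recursive snapshot and send/receive into the new pool, roughly like this (pool names are placeholders, and -F overwrites the target, so double-check it):

Code:
# Snapshot everything on the old pool
zfs snapshot -r oldpool@migrate
# Replicate the whole pool, datasets and all, into the new one
zfs send -R oldpool@migrate | zfs receive -F newpool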
 