I'm about to build a ZFS server, and apparently in this day and age of multi-TB disks you need to use ashift=12 to properly align with modern 4K-sector (Advanced Format) drives.
I have been reading that some people have problems with wasted space with ashift=12.
First off, I will always follow the power-of-2 rule for data disks per vdev.
I will be doing a 6x2TB RAIDZ2 vdev and a 6x3TB RAIDZ2 vdev in my zpool, so each vdev is 4 data disks plus 2 parity.
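For context, here's roughly the create command I have in mind ("tank" and the short device names are just placeholders; I know the real thing should use /dev/disk/by-id paths):

# create the pool with both RAIDZ2 vdevs, forcing 4K (2^12) alignment
zpool create -o ashift=12 tank \
    raidz2 sda sdb sdc sdd sde sdf \
    raidz2 sdg sdh sdi sdj sdk sdl
# sanity-check the ashift the vdevs actually ended up with
zdb -C tank | grep ashift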
But since my server is a media server and practically ALL my files are larger than 1MB each, will I lose much space to ashift=12? My understanding is that I won't, and that the padding only really matters when you have thousands of small files under 4K each, but I wanted to hear from people who understand this better.
I also only have about 100,000 files in my 12TB of data (which I believe is pretty low for that much data), so I'd expect the metadata overhead to be small as well (roughly 400MB if each file costs a 4K block worth of metadata).
My second question is about compression. Does this help offset some of the loss due to ashift=12?
From what I understand, it's a good idea to enable in most cases.
Performance-wise, I really only care about getting as close as I can to saturating my gigabit NIC; I never need anything faster. As long as I can get a solid ~75MB/s or so I'd be happy.
My hardware is quite fast: a 4-core/8-thread 3.3GHz Haswell Xeon and 16GB of RAM. I don't mind the extra CPU overhead of compression at all if it saves me space.
So do you think I should use compression given my situation?
If yes, which algorithm do I use?
It looks like LZJB is more widely supported, but I see ZoL also supports LZ4, which is newer and much better. Are there other options beyond these?
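If LZ4 is the way to go, I assume actually enabling it is just a one-liner on the pool's root dataset (again, "tank" is a placeholder name), something like:

# turn on LZ4 pool-wide; child datasets inherit it
zfs set compression=lz4 tank
# later on, check how much it's actually saving
zfs get compressratio tank

and that it only applies to data written after the property is set, not to existing files.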
Thanks