4K drives and x86 OpenSolaris clones: a workaround

[EDIT]

In fact, it was a much more complicated way to achieve what was done here, so the post has been swept clean.
 
I wonder if this will fix the problem I have of missing 600 GB of space when I run the original ASHIFT12 zpool binary on a 10-drive RAIDZ2 pool.
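
For reference, one quick way to see which ashift a pool actually got is to grep zdb's dump of the cached pool configs (this reads /etc/zfs/zpool.cache, so the pool must have been imported at least once):

    # ashift: 9 means 512-byte sectors, ashift: 12 means 4 KiB sectors
    zdb | grep ashift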
 
I didn't notice such behaviour using Grant Pannell's binary on my 9-disk RAIDZ, or with the original files, or with my files. Each disk is supposed to be 2 TB but is actually 2*10^12 bytes, or 1.82 TiB. My RAIDZ shows 14.4 TB free when empty, so it's pretty much correct.

If you can afford to move your data, let me know how it turns out. I'm filling my RAIDZ right now; I'll know how it adds up within 48 hours.
 
I have 10x 2 TB Samsung HD204UI drives. When I used the old zpool to make a RAIDZ2, I got the expected amount, but when I used zpool-12 I lost an additional 600 GB (according to zfs list on the empty pool).

I did do some experiments: with fewer than 8 drives in RAIDZ2 the difference is minimal, but past 8 drives the loss of space is much more pronounced.

Unfortunately I cannot test your tool properly by recreating the zpool, as I have lots of data on it and no way of moving things around.

But maybe replacing the .so files and such could correct the free-space figure anyway, if it is only a reporting issue?
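
One thing worth noting: ashift is recorded in the vdev labels when the pool is created, so swapping userland binaries and .so files afterwards cannot change the on-disk layout, and the free-space figure most likely comes from the kernel's own estimate rather than from the userland tools. You can at least see where the space goes by comparing the raw and usable views (pool name illustrative):

    # raw capacity, before parity and padding
    zpool list tank
    # usable space as the filesystem layer estimates it
    zfs list tank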
 
This is interesting. My new NAS hardware should be in this week, including 3x F4s. I'll have to contain myself and do some experimenting before committing. I think this is being made into more of an issue than it is. For a storage box, anything past what it takes to max out your network is wasted anyway.
 
Uh, for me this is a loss of space, not a loss of bandwidth!

Instead of getting 13.2 TB of free space (in the ZFS filesystem, not the zpool) I get 12.6 TB with 10x 2 TB drives in RAIDZ2. With the normal zpool (ashift=9) it is 13.2 TB; with the modified zpool (ashift=12) I only get 12.6 TB available. This is the real space reported by the ZFS filesystem, not the aggregate pool size before parity.

But if you're talking about bandwidth and you only use it for networked files, you won't see much of a difference.
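
For what it's worth, the loss is most likely RAIDZ allocation padding rather than a bug: RAIDZ rounds every allocation up to a multiple of (parity + 1) sectors so it never leaves free segments too small to reuse, and with 4 KiB sectors that rounding costs far more. A back-of-the-envelope sketch for a 10-disk RAIDZ2 writing a 128 KiB block (the size ZFS assumes when estimating free space):

    ashift=12 (4 KiB sectors):
      data     = 128 KiB / 4 KiB = 32 sectors
      parity   = 2 per row x 4 rows = 8 sectors  (32 data / 8 data disks = 4 rows)
      subtotal = 40 sectors, rounded up to a multiple of 3 -> 42 sectors
      usable ratio = 32/42 ~ 0.762

    ashift=9 (512 B sectors):
      data = 256, parity = 64, subtotal = 320 -> rounded up to 321 sectors
      usable ratio = 256/321 ~ 0.798

The two ratios differ by about 4.5%, which is right in line with 13.2 TB dropping to 12.6 TB. And since whether the rounding bites depends on the exact drive count, it would also explain why narrower RAIDZ2 pools show almost no difference.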
 
Did a quick test using VMware: added ten 2000 GB drives to the VM and created a RAIDZ2.

The vanilla system gives me 15.3T free.
zpool-12 gives me 14.6T free.
My modified files give me 14.6T free, and the zdb output is EXACTLY the same as with zpool-12, proving that my modifications were pointless. But at least now I have higher confidence in the zpool-12 binary; the panic I had with it was a leftover from earlier hardware issues.
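
For anyone who wants to reproduce this, the test boils down to something like the following (pool and device names are illustrative):

    # create a 10-disk raidz2, then compare the reported usable space
    zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
        c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0
    zfs list tank
    zpool destroy tank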
 
Heh, your results are exactly the same as mine when comparing the unmodded zpool with zpool-12.

Thanks for confirming!
 