Optimal raidz vdev sizes for 24 spindles?

Zarathustra[H];1041824129 said:
Interesting. That makes absolutely no sense to me. Fiber SHOULD have inherently higher latency due to needing a transducer on each side... The act of changing the signal from one form of energy to another is always going to add some delay.

They must have REALLY botched the 10GBaseT implementation in that case.

The higher power use I understand and expected, but I didn't think it would be significant compared to the rest of the server (especially considering how many spinning HDs we are talking about).

Ahh. In that analysis it looks like it would be less of an issue in my application.

I'd use the Xeon-D board for bare-metal ZFS storage right next to my switch and my ESXi server. It would have a 2ft cable in each of the 10GBaseT ports, one to the switch and one to the ESXi server, minimizing both latency and wattage due to the short cable length.

I mean, we'd be talking about 0.7 microseconds vs 2 microseconds, which just doesn't seem like a significant difference, unless you have many cables the traffic needs to traverse and the delays add up, which wouldn't be the case for me.

Either way, the scenario above is partially fictional, as I have neither a Xeon-D nor a 10Gig-capable switch yet :p
 
The sub.mesa post has been proven to be completely irrelevant, so you don't have to worry about it. I would rather take drive-count guidance from this blog post.

A 4k-aligned configuration will give you better performance, but nothing to write home about unless you're optimising for performance rather than for security and space.
 

4k is a must for the current HD generation.
True... 4k gives better performance due to correct alignment.
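For what it's worth, a rough sketch of forcing 4k alignment at pool creation time on ZoL (the pool name and the sdb-sdg device names are just placeholders for illustration):

Code:
# Force 4k sectors (ashift=12) so writes stay aligned on 4k-sector drives
zpool create -o ashift=12 tank raidz2 sdb sdc sdd sde sdf sdg
# Confirm the ashift that actually got applied to the vdev
zdb -C tank | grep ashift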
 
Zarathustra[H];1041821743 said:
Nice. Yeah, I was going to say, you might want to increase the RAM a bit.

I know the oft-quoted 1GB of RAM per TB of disk space for ZFS is an extreme worst-case scenario, but I don't think I'd want to do 96TB on 32GB of RAM.

I've been eyeballing those Xeon-D boards as well. If it weren't for the fact that I'd have to spend so much money re-buying DDR4 RAM, I'd probably have one already.
1GB RAM per TB of disk space is only recommended if you use ZFS deduplication. But ZFS deduplication is broken as of now, so avoid it.

If you are not using ZFS deduplication, 2-4GB RAM in a server is fine. The only issue is that ZFS has a very fast disk cache (the ARC), and with less RAM you don't get a large disk cache. That is not a problem for a media server, because a disk cache only helps when you revisit a file many times, so it can be served from RAM and sped up. If you are streaming media, it is unlikely that many people are watching the same movie at the same time, so media will not benefit from a large disk cache. For a media server, I would say 8GB RAM at most. Probably even less.
https://en.wikipedia.org/wiki/ZFS#ZFS_cache:_ARC_.28L1.29.2C_L2ARC.2C_ZIL
"...There are claims that ZFS servers must have huge amounts of RAM, but that is not true. It is a misinterpretation of the desire to have large ARC disk caches..."
 
large_blocks is supported in ZoL. I've been using it on Debian for a couple of months now. It was committed back in May.
https://github.com/zfsonlinux/zfs/commit/f1512ee61e2f22186ac16481a09d86112b2d6788

I think it's at the very least worth enabling for the datasets that store big files, especially if you are using RAIDZ2 and aren't using 6-disk-wide vdevs, as you will definitely gain some capacity.

https://web.archive.org/web/2014040...s.org/ritk/zfs-4k-aligned-space-overhead.html
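If it helps, a rough sketch of turning it on for an existing pool (tank/media is just an example dataset name, not something from the posts above):

Code:
# Enable the large_blocks feature flag on the pool (a one-way upgrade)
zpool set feature@large_blocks=enabled tank
# Raise the recordsize only on the dataset that holds big files
zfs set recordsize=1M tank/media
# Confirm the setting
zfs get recordsize tank/media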

I just created a new pool with ZoL 0.6.5.3. I'm not 100% sure, but the correct way to utilize this feature is to create the pool with:

Code:
zpool create tank -o ashift=12 -O recordsize=1M raidz2

Is that correct?
Code:
zfs get recordsize tank
is returning 1M for me after creation.

Also, should I expect to gain some capacity with 2x 6-disk raidz2 vdevs in my pool with this configuration, or is it a waste of time?
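For reference, this is roughly how I'd expect the full command to look with my two vdevs spelled out (the sda-sdl names below are placeholders, not my actual devices):

Code:
# Two 6-disk raidz2 vdevs in one pool, 4k aligned, 1M records by default
zpool create -o ashift=12 -O recordsize=1M tank \
  raidz2 sda sdb sdc sdd sde sdf \
  raidz2 sdg sdh sdi sdj sdk sdl
# Check the feature state and the recordsize the datasets will inherit
zpool get feature@large_blocks tank
zfs get recordsize tank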
 