ZFS build check & hardware RAID

Interesting, thanks!

I made my choice: three separate configs without virtualization (a Windows desktop only, a small 24/7 ZFS NAS (10 drives in Z2), and a larger ZFS storage box (24 drives)).
I read that ZFS needs (for best performance) 1GB of RAM for each TB of data even without dedupe, just for metadata.
So I'll go for LGA2011 (idle power consumption is equivalent to LGA1155 Ivy Bridge, so it should be fine).

@_Gea:
Sounds good, but which is better: 2*10 Z2 or 2*12 Z2? (Optimal number of drives vs. more drives: where's the tradeoff threshold?)
I won't need a hot spare as I already plan to buy cold spares and can swap them myself. Also, having 4 wasted slots annoys me a little.

So I'll also need 2*ZIL and 2*L2ARC if I run 2 pools?

About booting, is there a benefit to using an SSD instead of booting from a USB stick? (Boot time will be longer from a stick, but after that?)

Also, I will use 5K4000 HDDs; I already have 6 of them :)

With 10 disks in a new RAID-Z2 you have a perfectly balanced pool (all writes hit all disks evenly), whereas with 12 disks
some disks are not used evenly on every write. But on average, a 12-disk RAID-Z2 is faster in my experience.
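
One simplified way to see the balance point (a rough sketch, assuming the default 128K recordsize and 4K sectors / ashift=12, and ignoring parity and padding):

```python
# Rough sketch: how the data sectors of one default-size record spread over
# the data disks of a RAID-Z2 vdev. Assumes 128K recordsize and 4K sectors
# (ashift=12); parity and padding sectors are ignored for simplicity.

RECORDSIZE = 128 * 1024          # bytes in one ZFS record (default)
SECTOR = 4 * 1024                # bytes per sector with ashift=12

for disks in (10, 12):
    data_disks = disks - 2                      # RAID-Z2 keeps 2 disks worth of parity
    sectors = RECORDSIZE // SECTOR              # 32 data sectors per record
    per_disk, leftover = divmod(sectors, data_disks)
    if leftover == 0:
        note = "even: every data disk gets the same share"
    else:
        note = f"uneven: {leftover} disks carry one extra sector"
    print(f"{disks}-disk Z2 ({data_disks} data disks): {per_disk} sectors/disk, {note}")
```

With 10 disks (8 data disks) a 128K record splits into 4 sectors per disk; with 12 disks (10 data disks) it does not divide evenly.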

If you use 2 pools:
You need 2 L2ARC and 2 ZIL devices - but only if you need them at all:
- with enough RAM, an additional L2ARC SSD won't help much
- a ZIL is only used for sync writes (e.g. NFS + ESXi); CIFS, for example, does not use the ZIL

- After boot, you have only minimal writes when atime is disabled (mainly some logs).
- With mirrored fast USB sticks (which double read performance), reliability is really good and boot time is short.

And there is no need for 1 GB per TB of data for functionality, only for performance, because the extra RAM is used to
cache data and deliver reads from RAM. But with multi-TB pools you should use at least 8 GB of RAM.
 
About RAM requirements, I read here:
#1: Add Enough RAM

A small amount of data on your disks is spent for storing ZFS metadata. This is the data that ZFS needs, so it knows where your actual data is. In a way, this is the roadmap that ZFS needs to find its way through your disks and the data structures there.

If your server doesn't have enough RAM to store metadata, then it will need to issue extra metadata read IOs for every data read IO to figure out where your data actually is on disk. This is slower than necessary, and you really want to avoid that. If you're really short on RAM, this could have a massive impact!

How much RAM do you need? As a rough rule of thumb, divide the size of your total storage by 1000, then add 1 GB so the OS has some extra RAM of its own to breathe. This means for every TB of data, you'll want at least 1GB of RAM for caching ZFS metadata, in addition to one GB for the OS to feel comfortable in.

Having enough RAM will benefit all of your reads, no matter if they're random or sequential, just because they'll be easier for ZFS to find on your disks, so make sure you have at least n/1000 + 1 GB of RAM, where n is the number of GB in your storage pool.
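
Taken literally, that rule gives pretty big numbers for the boxes I'm planning. A quick sketch (the pool sizes below are just my rough raw-capacity estimates, not exact usable space):

```python
# The article's rule of thumb: RAM >= n/1000 + 1 GB, where n is pool size in GB.
# Pool sizes below are rough raw-capacity guesses for the builds in this thread.

def rule_of_thumb_ram_gb(pool_gb):
    return pool_gb / 1000 + 1     # 1 GB per TB of data + 1 GB for the OS

for label, pool_tb in (("10 x 4TB Z2 (raw)", 40), ("24 x 4TB (raw)", 96)):
    ram = rule_of_thumb_ram_gb(pool_tb * 1000)
    print(f"{label}: about {ram:.0f} GB RAM by that rule")
```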

wut?

Also, what's with this "too much RAM" issue?


About the ZIL, so it's useless in a classic NAS/fileserver without ESXi?
So if I understand correctly, I'd better go with async writes? (The servers will be behind a UPS.)

About L2ARC, is it really that useless? I thought it would be nice to cache the most frequently used files on some fast storage to help the slower spinning drives with SSD IOPS; am I wrong?
What do you call enough RAM?

About fans, I wonder if three GentleTyphoon AP-12s (800 RPM) installed in the RSV-L4000 fan wall would be enough to cool 10*5K4000 drives (6.4W max each, housed in two CSE-M35Ts)?
The GTs all seem to have decent static pressure, but I don't know if only 800 RPM is enough.

About the UPS, the 5PX 1500VA seems very interesting. As I want something quiet, I noticed it is rated at <45dB, but after searching for reviews and info it seems 45dB applies only at maximum fan speed (on battery, while charging, or under high load).
So it should be quieter in normal operation, but does anyone know the noise level in everyday use more precisely?
 
L2ARC is *THE* killer addition to RAID-Z pools. RAID-Z, due to its design, has very low random IOPS. But when combined with L2ARC you can soften the rough edges and get sequential performance from your hard drives while getting random read performance from your SSD.

SSDs used as L2ARC can cache both data and metadata, but caching only the metadata is already very useful. Just consider the fact that the L2ARC has to be rebuilt on reboot. So it only works effectively when your server is running 24/7, so it can build a profile of frequently requested data that should be cached.
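
If you do go that route, a minimal sketch of wiring it up (pool name "tank" and the SSD device name are placeholders; `zpool add ... cache` and the `secondarycache` property are the knobs involved):

```python
# Sketch: add an SSD (partition) as an L2ARC cache device and, optionally,
# restrict the L2ARC to metadata only. "tank" and "c2t1d0" are placeholders.
import subprocess

def run(cmd):
    print("#", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["zpool", "add", "tank", "cache", "c2t1d0"])        # attach the cache vdev (L2ARC)
run(["zfs", "set", "secondarycache=metadata", "tank"])  # cache only metadata in L2ARC
run(["zpool", "status", "tank"])                        # confirm the cache device is listed
```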
 
About the PSU, is the Seasonic P-660/P-760/P-860 sufficient?
Because the 5K4000 draws 1.2A on the 5V rail at startup, which means 28.8A for 24 drives, more than the PSU's rated 25A...
On the other hand, these quality PSUs can easily handle a small overload for a short time.
 
[attached image: hitachi_deskstar_5k4000_4tb_power_values.png]

http://www.storagereview.com/hitachi_deskstar_5k4000_review

2-3W on the 5V line seems more realistic, meaning you need about 14.4A for 24 drives, not 28.8A. And yes, power supplies can provide more power than they are rated for; the upper limit is the level at which the overcurrent protection kicks in.
 
I was thinking that because the startup power requirements in the datasheet are 1.2A on the 5V rail + 1.5A on the 12V rail.
The Seasonic P-660 offers 25A on the 5V rail (and 125W combined on 5V + 3.3V) and 55A on the 12V rail.
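
Putting the numbers side by side (a quick sanity check, using the datasheet figures and the ~3W-on-5V figure from the review above):

```python
# Compare worst-case datasheet spin-up current vs. the ~3W-on-5V measured
# figure, against the Seasonic P-660's rail ratings quoted above.

DRIVES = 24
P660_5V_A, P660_12V_A = 25, 55        # P-660 ratings: 25A on 5V, 55A on 12V

datasheet_5v_a  = DRIVES * 1.2        # 1.2 A per drive on 5V at spin-up (datasheet)
datasheet_12v_a = DRIVES * 1.5        # 1.5 A per drive on 12V at spin-up (datasheet)
measured_5v_a   = DRIVES * 3.0 / 5.0  # ~3 W per drive on the 5V line (review figure)

print(f"5V  spin-up (datasheet): {datasheet_5v_a:.1f} A vs rail {P660_5V_A} A")
print(f"12V spin-up (datasheet): {datasheet_12v_a:.1f} A vs rail {P660_12V_A} A")
print(f"5V  (review figure):     {measured_5v_a:.1f} A vs rail {P660_5V_A} A")
```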
 
Isn't there a way to prevent the L2ARC from being erased and rebuilt on each shutdown?
 
Cool!
But is the last compatible version still v28?

From the link:
"L2ARC devices can be shared across different pools."
Does this mean I can use only one L2ARC device for two pools?

Also, about the boot device, I can't decide between an SSD (looks like overkill if the OS stays in RAM after boot) and a USB2 stick (much slower).

What do you think of the SanDisk Cruzer Facet?
About size, is 16GB enough? (Because, like HDDs, they are more like 15.4GB than 16GB.)
 
Last compatible version:
Pool v28 and ZFS v5

L2ARC is like a vdev and is per pool.
You can use several SSDs or partition one SSD.
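
For example (a rough sketch; pool names and the Solaris-style slice names are placeholders):

```python
# Sketch: one SSD split into two slices, each handed to a different pool as
# its own L2ARC. Pool names and slice names (cXtYdZsN) are placeholders.
import subprocess

subprocess.run(["zpool", "add", "pool1", "cache", "c3t0d0s0"], check=True)  # slice 0 -> pool1
subprocess.run(["zpool", "add", "pool2", "cache", "c3t0d0s1"], check=True)  # slice 1 -> pool2
```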

If you use a lightweight OS like OmniOS, a 16 GB USB stick is perfect.
But use a modern, fast one (prefer a good USB3 MLC or SLC stick), and ZFS-mirror two sticks
for best performance and reliability - and disable atime.
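
A minimal sketch of the mirrored-stick part, assuming an OmniOS-style root pool called "rpool" (device names are placeholders, and installing the boot loader on the second stick is a separate step):

```python
# Sketch: mirror the root pool onto a second USB stick and disable atime.
# "rpool" and the cXtYdZsN device names are placeholders.
import subprocess

subprocess.run(["zpool", "attach", "rpool", "c1t0d0s0", "c2t0d0s0"], check=True)  # add 2nd stick as mirror
subprocess.run(["zfs", "set", "atime=off", "rpool"], check=True)                  # no access-time writes
```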
 