ZFS L2ARC sizing

I was wondering if anyone could comment from experience on sizing L2ARC ssd caches.

Given: Server memory is 256GB
Pool size: Hundreds of TB (most is archival - rarely used)
Typical Data Frequently Accessed: Oracle SMB serving standard Windows CIFS home directories (some Linux, but it's 1 Linux user to ~50 Windows users)

I was thinking 1x-3x system RAM which would put me around 400-800GB.

My interconnect is SAS2, so I'm thinking of 2x the Hitachi SSD800MH in 400GB size.

Would anyone recommend 3x-4x the Hitachi SSD800MH in 200GB size for extra iops?

Thanks,
AUPhil
 
I'm not sure the specific device matters as much for L2ARC as it does for an SLOG device. I would definitely recommend multiple smaller L2ARC devices, though, since they are used round-robin.
 
Thanks, we have a ZeusRAM mirror (8GB) for write logging already.

My concern was not eating up too much ARC (memory) on the headers used for L2ARC addressing. I was wondering what the sweet spot for L2ARC sizing would be, in GB.
 
Ah, okay. As I recall, it's a pretty small percentage (depending on record size and so on). I think 5% is the most it will ever reach, e.g. if you have 400GB of L2ARC, the memory requirement for that shouldn't go much above 20GB of ARC. Basically, unless the ratio is outrageous (like 400GB of L2ARC and 8GB of ARC), you should be fine.
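As a rough back-of-the-envelope check of that 5% figure, here is a minimal sketch in Python (the exact per-record header cost varies by ZFS version and record size, so treat the ratio as an assumed worst case, not a guarantee):

```python
# Sketch: ARC (RAM) consumed by L2ARC headers, assuming a worst-case
# overhead ratio of about 5%. The real per-record header size depends
# on the ZFS version and the record size, so this is illustrative only.

def arc_overhead_gb(l2arc_gb, overhead_ratio=0.05):
    """Return the approximate ARC consumed by L2ARC headers, in GB."""
    return l2arc_gb * overhead_ratio

for l2arc in (200, 400, 800):
    print(f"{l2arc} GB L2ARC -> ~{arc_overhead_gb(l2arc):.0f} GB of ARC for headers")
# 400 GB of L2ARC -> ~20 GB of ARC, matching the example above.
```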
 
I think I read somewhere _Gea saying that 256GB is too much memory.
 
Let's say that your average block size is 32KB; then you would need around 8MB of ARC per GB of L2ARC, or 8GB per TB.

With 256GB of RAM, you could then size your L2ARC up to 32TB. Most likely you will spend some of your memory on other things, so anything less than 30TB should be safe.
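To make the arithmetic explicit, here is a small sketch. The ~256 bytes of ARC header per cached record is an assumption implied by the 8MB-per-GB figure above; the real header size differs between ZFS versions:

```python
# Sketch of the L2ARC header arithmetic: assumed ~256 bytes of ARC
# header per L2ARC record and a 32KB average block size (both numbers
# implied by the post above, not exact for every ZFS version).

HEADER_BYTES = 256          # assumed ARC header size per L2ARC record
BLOCK_KB = 32               # assumed average block size

records_per_gb = (1024 * 1024) // BLOCK_KB             # records in 1 GB of L2ARC
header_mb_per_gb = records_per_gb * HEADER_BYTES / (1024 * 1024)
print(f"~{header_mb_per_gb:.0f} MB of ARC per GB of L2ARC")        # ~8 MB

ram_gb = 256
header_gb_per_tb = header_mb_per_gb                     # MB per GB == GB per TB
print(f"{ram_gb} GB RAM supports up to ~{ram_gb / header_gb_per_tb:.0f} TB of L2ARC")  # ~32 TB
```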
 
I think I read somewhere _Gea saying that 256GB is too much memory.

I can't imagine this being a hard and fast rule. For a specific use case it may be too much, as in "you spent too much money on that ram".
 
Let's say that your average block size is 32KB; then you would need around 8MB of ARC per GB of L2ARC, or 8GB per TB.

With 256GB of RAM, you could then size your L2ARC up to 32TB. Most likely you will spend some of your memory on other things, so anything less than 30TB should be safe.

How would this play out with the default 128k record size?

Where did you get your values?

Thanks,
AUPhil
 
A 128K recordsize just means that the largest record can be 128KB. If you have a lot of small files or use compression, the actual record size is often smaller.

My numbers are from an earlier PostgreSQL database where I had a ZFS file system with the recordsize set to 8K. Each gigabyte of L2ARC consumed 32MB in the header part of the ARC.
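Extending that to other record sizes with the same assumed ~256 bytes of header per record (which matches both the 8K/32MB and 32K/8MB data points in this thread), the default 128K recordsize works out to roughly 2MB of ARC per GB of L2ARC:

```python
# Sketch: ARC header cost per GB of L2ARC at different record sizes,
# assuming ~256 bytes of header per record (consistent with the
# 8K -> 32MB/GB and 32K -> 8MB/GB figures quoted in this thread).

HEADER_BYTES = 256

for record_kb in (8, 32, 128):
    records = (1024 * 1024) // record_kb               # records per GB of L2ARC
    mb = records * HEADER_BYTES / (1024 * 1024)
    print(f"{record_kb:>3}K records: ~{mb:.0f} MB of ARC per GB of L2ARC")
# 8K -> ~32 MB, 32K -> ~8 MB, 128K -> ~2 MB
```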
 
I have not used that much RAM myself. It is currently not suggested to use more than 128GB of RAM with ZFS. I have heard that Nexenta addressed this in the new NexentaStor 4, but I have not heard of any experiences with it yet.

You may read nex7's blog post about it:
http://nex7.blogspot.de/2013/03/readme1st.html

or google "zfs 128GB deadlock".
 
I have not used that much RAM myself. It is currently not suggested to use more than 128GB of RAM with ZFS. I have heard that Nexenta addressed this in the new NexentaStor 4, but I have not heard of any experiences with it yet.

You may read nex7's blog post about it:
http://nex7.blogspot.de/2013/03/readme1st.html

or google "zfs 128GB deadlock".



richard.elling said:
...There are a number of problems identified on 'large memory systems' (> 128-192 GB or so) that have culminated in Nexenta forcing the ARC max to 128 GB on many builds today. Identification and resolution of the bugs is ongoing. AFAIK, at least some of the identified issues will effect any illumos-based OS, not just Nexenta [which is a bit older], but I can't speak for Linux or FreeBSD ZFS. The common wisdom in the field is 128-192, and also that it is enough to limit ARC. ...


http://www.listbox.com/member/archive/182191/2013/11/sort/time_rev/page/4/entry/19:173/




To me, this reads like the OP should be OK with 256GB of RAM if he caps his ARC at 128GB?
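If you do cap the ARC, on an illumos-based system the usual knob is zfs_arc_max in /etc/system; here is a tiny sketch to compute the byte value (the tunable name assumed here is the standard illumos one, so double-check on your platform; FreeBSD and Linux use different knobs):

```python
# Sketch: byte value for capping the ARC at 128 GB via the illumos
# /etc/system tunable zfs_arc_max. Tunable name assumed to be the
# standard illumos one; FreeBSD/Linux expose different knobs.

cap_gb = 128
cap_bytes = cap_gb * 1024 ** 3
print(f"set zfs:zfs_arc_max = {cap_bytes}")
# -> set zfs:zfs_arc_max = 137438953472
```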
 
The only way you can actually know how much L2ARC you need is to test it. With 256GB of RAM, why not just start out without any L2ARC at all?

Keep an eye on performance and your ARCSTAT numbers.
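A minimal sketch of the sort of monitoring meant here, assuming an illumos-style box where "kstat -p zfs:0:arcstats" is available (FreeBSD exposes the same counters via sysctl, Linux/ZoL via /proc/spl/kstat/zfs/arcstats):

```python
# Sketch: pull ARC/L2ARC hit ratios from kstat on an illumos-style
# system. Assumes `kstat -p zfs:0:arcstats` exists; adjust the data
# source for FreeBSD (sysctl) or Linux (/proc/spl/kstat/zfs/arcstats).

import subprocess

out = subprocess.run(["kstat", "-p", "zfs:0:arcstats"],
                     capture_output=True, text=True, check=True).stdout

stats = {}
for line in out.splitlines():
    name, _, value = line.rpartition("\t")
    try:
        stats[name.rsplit(":", 1)[-1]] = int(value)
    except ValueError:
        pass  # skip non-numeric fields such as crtime or class

def hit_ratio(hits, misses):
    total = hits + misses
    return 100.0 * hits / total if total else 0.0

print(f"ARC hit ratio:   {hit_ratio(stats.get('hits', 0), stats.get('misses', 0)):.1f}%")
print(f"L2ARC hit ratio: {hit_ratio(stats.get('l2_hits', 0), stats.get('l2_misses', 0)):.1f}%")
```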
 
In my opinion, money is always better spent on ARC than L2ARC. I always paid for RAM and left the whole L2ARC thing alone.

This works fine for me. I do have to say I have never worked with servers with more than 128GB of RAM. I was unaware that going above 128GB is somehow detrimental.
 
I was wondering if anyone could comment from experience on sizing L2ARC ssd caches.

Given: Server memory is 256GB
Pool size: Hundreds of TB (most is archival - rarely used)
Typical Data Frequently Accessed: Oracle SMB serving standard Windows CIFS home directories (some Linux, but it's 1 Linux user to ~50 Windows users)

Can L2ARC on SSD increase CIFS transfer speeds?
 
I've seen the formula for L2ARC sizing range from 1:5 to 1:38 (GB of ARC to GB of L2ARC, respectively). But as a previous poster said, the only way to know for sure is to monitor arcstat or arc_summary. I've also read that the L2ARC is meant to either improve performance or do nothing (though it has been argued that it can hurt performance). If I'm not mistaken, technically it can hurt performance, but any time you don't have to retrieve data from spinning disks it's a good thing... even if that means putting a little pressure on the ARC to map the index of the L2ARC.
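For reference, here is what that ratio range works out to for the ARC sizes discussed in this thread (a simple sketch of the arithmetic, nothing more):

```python
# Sketch: L2ARC sizes implied by the commonly quoted 1:5 to 1:38
# ARC-to-L2ARC ratios, for the ARC sizes mentioned in this thread.

for arc_gb in (128, 256):
    low, high = arc_gb * 5, arc_gb * 38
    print(f"{arc_gb} GB ARC -> {low}-{high} GB of L2ARC (1:5 to 1:38)")
# 128 GB ARC ->  640-4864 GB of L2ARC
# 256 GB ARC -> 1280-9728 GB of L2ARC
```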
 
It's also recommended to know your working dataset size before getting into L2ARC, and at that scale (128GB+ of RAM) I'd venture a guess and say that unless you know for a fact otherwise, don't mess with it.
 
Can L2ARC on SSD increase CIFS transfer speeds?

I think so, but I'm not positive; or rather, I don't have numbers to show, but personally it helped me out. Again, no numbers, but I've also read and witnessed first hand that having an L2ARC speeds up NFS.
 