How much CPU power do I really need for ZFS?

DeChache
The time has come to rebuild my home NAS. It's old and doesn't really support enough RAM for consistent performance.

I've been running on an L5630 with 16GB DDR3 ECC for the last year or so, and it does its job, but cache hit rates and paging can get bad, and considering I use it mainly to back ESXi hosts, that gets annoying.

Currently it has 3 different arrays:
  • RAIDZ1 of 4× 160GB SSDs
  • 4-drive mirror of 600GB 10K drives with 2 SSDs for ZIL
  • RAIDZ1 of 6× 3TB Reds in two 3-drive groups. (Mainly just file serving, but I do have a couple of large VMs here too. Nothing high-traffic.)

I hope to expand the SSD array enough in the near future to get rid of the 600GB 10K drives.

I'm considering just going with a Skylake-based i3-6100 with 32GB of ECC (it supports up to 64GB) to replace the above box. The L5630 is a 4C/8T chip, but it's old, only runs at 2.13 GHz, and almost never shows any load. So my thought is that a 2C/4T chip at 3.8 GHz should be more than sufficient. I do run compression, but just the base LZ4. I have no need for encryption at the volume level. I would like to look into running dedupe, but I haven't had it thus far, so I can live without it.
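For what it's worth, here's the back-of-envelope dedupe math I keep coming back to. The ~320 bytes of RAM per unique block is just the rule-of-thumb figure people quote, and the pool sizes and block sizes below are illustrative, not measured from my pool:

```python
# Rough dedup table (DDT) RAM estimate -- rule-of-thumb numbers only.
# Assumes ~320 bytes of in-core memory per unique block, a commonly
# cited figure, not anything measured on a real pool.

BYTES_PER_DDT_ENTRY = 320  # approximate in-core cost per unique block

def ddt_ram_gib(pool_size_tib: float, avg_block_kib: float) -> float:
    """Estimate RAM needed to hold the whole DDT in core."""
    pool_bytes = pool_size_tib * 2**40
    blocks = pool_bytes / (avg_block_kib * 2**10)
    return blocks * BYTES_PER_DDT_ENTRY / 2**30

# Example: 6 TiB of data at an average 64 KiB block size.
print(f"{ddt_ram_gib(6, 64):.1f} GiB of RAM just for the DDT")
# Small-block VM workloads (e.g. 8 KiB) make it far worse:
print(f"{ddt_ram_gib(6, 8):.1f} GiB at 8 KiB blocks")
```

That prints 30 GiB and 240 GiB respectively, which is why I'm happy to live without dedupe.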

Any thoughts on using an i3 over a stronger CPU for my use case?
 
If you aren't running anything more than LZ4 compression, you shouldn't have an issue with the i3.
Naturally, I'd recommend bumping up to 64GB, but... 32GB should net you a much better cache hit ratio.

Try to grab a board with either an M.2 slot or an extra PCI-E 3.0 x4 slot for NVMe drives. Absolutely wonderful for L2ARC.
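Adding one later is a two-command job, so the board slot is the only real commitment. A minimal sketch, driving the stock zpool commands from Python; the pool name and device path are placeholders:

```python
# Minimal sketch: attach an NVMe drive as L2ARC (cache vdev) to an
# existing pool. Pool name "tank" and the device path are hypothetical.
import subprocess

POOL = "tank"
CACHE_DEV = "/dev/disk/by-id/nvme-example"  # placeholder device id

# "zpool add <pool> cache <device>" adds an L2ARC device; cache vdevs
# can be removed again later with "zpool remove" if it doesn't help.
subprocess.run(["zpool", "add", POOL, "cache", CACHE_DEV], check=True)

# Verify the cache device shows up, then watch it under load:
subprocess.run(["zpool", "status", POOL], check=True)
subprocess.run(["zpool", "iostat", "-v", POOL, "5", "3"], check=True)
```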
 

I should have a spare PCI-E slot, so I will keep that in mind. 64GB is the end goal, but it's just not in the budget right now. 16GB DDR4 DIMMs aren't horribly expensive, but they're enough to make me hesitate. 32GB is double what I have right now, so let's start there.
 
OK, average advice above.
1. L2ARC is a waste of money for home use. I pull up to 750MB/s without it on my 8-drive Z2 array. Saturating gigabit is easy (see the quick arithmetic after this list).

2. You don't need 32 or even 64 gig of RAM; 10-12 gig is ample, and 16 is a reasonable figure if you're buying new sticks. The only reason to have more is jails or virtualisation, and you can fit a few jails in 16 gig.

3. Most of the time my (virtual) CPUs barely go above idle. Fast chips aren't required. An i3 is ample; a G4400 would probably work out fine (and it does have ECC support).
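The gigabit point is just line-rate arithmetic. A quick worked version, ignoring protocol overhead (real SMB/NFS numbers land a bit lower):

```python
# Why saturating gigabit is easy: line-rate arithmetic, ignoring
# protocol overhead (real-world SMB/NFS throughput lands a bit lower).
GIGABIT_BPS = 1_000_000_000
line_rate_mib = GIGABIT_BPS / 8 / 2**20  # ~119 MiB/s theoretical ceiling

# A single modern HDD streams well over 100 MB/s sequentially, so even
# one spinning disk can fill the pipe; an 8-drive Z2 pool doing
# ~750 MB/s has roughly 6x more bandwidth than the wire can carry.
print(f"Gigabit ceiling: {line_rate_mib:.0f} MiB/s")
print(f"750 MB/s pool vs wire: {750 / (GIGABIT_BPS / 8 / 1e6):.1f}x the link")
```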
 

The only thing I will add is that my back end is 10GbE and I generate a lot of random I/O to the SSD arrays; in fact, it's probably 90% random. Right now I'm having trouble pushing more than 200-300 MB/s to any of my arrays.
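For anyone curious how I'm gauging "90% random": a crude probe along these lines, not a substitute for a proper fio run. The path and sizes are placeholders; the test file needs to be on the array and bigger than ARC, or the cache answers instead of the disks:

```python
# Crude random-read probe -- a sketch, not a benchmark suite.
import os, random, time

PATH = "/tank/vmstore/testfile.bin"  # hypothetical test file on the pool
BLOCK = 4096                         # 4 KiB random reads, VM-ish I/O
SECONDS = 10

fd = os.open(PATH, os.O_RDONLY)
size = os.fstat(fd).st_size
blocks = size // BLOCK

ops = 0
deadline = time.monotonic() + SECONDS
while time.monotonic() < deadline:
    # Read one random 4 KiB block somewhere in the file.
    os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)
    ops += 1
os.close(fd)

print(f"~{ops / SECONDS:.0f} random {BLOCK}-byte reads/s "
      f"({ops * BLOCK / SECONDS / 2**20:.1f} MiB/s)")
```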
 
Then I think your bottleneck will be RAM and the SSDs you use. More RAM means more room for caching, and L2ARC might have a place in that box if it's fast enough.
 
What kind of random I/O is this? If you have a lot, then you should consider going for an all-SSD build. And while we're at it, SSDs today are so reliable that you should be fine with striping; just have some backup.
 

VM traffic and Databases
 
If you need better performance, increase your RAM. If you're doing VM traffic, you should do mirrored pairs. After that, if you still have performance issues, then L2ARC might make sense. Of course, a SLOG would be worthwhile too.
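Something like this is the shape I mean. A rough sketch with made-up device names, driving the stock zpool commands from Python:

```python
# Sketch of the suggested layout: striped mirrors for the VM pool,
# then SLOG and L2ARC on top. All pool/device names are hypothetical.
import subprocess

def zpool(*args):
    subprocess.run(["zpool", *args], check=True)

# Striped mirror pairs ("RAID 10"): each mirror vdev adds its own IOPS.
zpool("create", "vmpool",
      "mirror", "/dev/sdb", "/dev/sdc",
      "mirror", "/dev/sdd", "/dev/sde")

# SLOG: only helps sync writes (NFS/iSCSI for ESXi is exactly that case).
zpool("add", "vmpool", "log", "/dev/nvme0n1")

# L2ARC: only worth it once RAM is maxed and ARC hit rates are still low.
zpool("add", "vmpool", "cache", "/dev/nvme1n1")
```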
 
Why not just destroy the first array and repurpose those SSDs as L2ARC? And simply add more RAM to the existing setup? A new CPU, mobo, and RAM looks like an unnecessary upgrade.
 

Because I like my SSD array. :) I'm consolidating the 10K array onto the SSDs. The 10K drives are as old as the rest of the system; it's time for them to be retired.

The current motherboard is flaky and tapped out when it comes to RAM. I don't want to invest any more money into a 6-year-old platform. It's time for a new system.
 
It appears that you are looking for IOPS more than sequential speed, so here is what I would do if I were you:
  1. Try to sell the 4× 160GB SSDs and buy 2 NVMe SSDs of 512GB. (Or 2× 1TB if there is stretch in the budget. Steer clear of raidz1, as it reduces IOPS to the level of a single drive; see the sketch at the end of this post.)
  2. Ditto for the 10K drives.
  3. Raise the RAM as high as the budget allows.
  4. Forget about L2ARC and SLOG devices. Especially do not use the 160GB SSDs for that, as the cache would then be slower than the actual drives.
  5. Forget about dedupe, as even 64GB is not enough RAM for it.
It's a home lab system. Simpler is better.

In summary:
Lots of RAM, an NVMe SSD array in RAID 10 for fast VM hosting, and a raidz1 or raidz2 pool of spinning rust for bulk storage.
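On the raidz1 IOPS point, a toy comparison. The per-drive IOPS figure is an assumed placeholder, and these are first-order rules of thumb (a raidz vdev delivers roughly one drive's worth of random IOPS; mirrors can read from every disk):

```python
# Back-of-envelope random-read IOPS for layouts of the same four SSDs.
DRIVE_IOPS = 50_000  # assumed per-SSD random read IOPS (placeholder)
DRIVES = 4

layouts = {
    "raidz1 (one 4-disk vdev)":     1 * DRIVE_IOPS,       # ~1 drive per vdev
    "2x mirror pairs, striped":     DRIVES * DRIVE_IOPS,  # reads hit all disks
    "plain stripe (no redundancy)": DRIVES * DRIVE_IOPS,
}
for name, iops in layouts.items():
    print(f"{name:32s} ~{iops:,} random read IOPS")
```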
 

I wish! Too bad those NVMe drives cost about as much as the system I'm considering. :) Maybe some day. The 160s are Intel DC S3500s, so they are good drives; not the fastest, but they are consistent. Some day they will get upgraded, and these will get moved to boot disks. What you describe would be my end goal. The current system has grown organically as I've needed space and then speed. The new box I hope to do right once and just let run.

I'm only using the SLOG right now with the 10K drives. I might move it to the array of Reds after the 10K drives are gone, but probably not; they are just used for file storage (Plex, Seafile, Git LFS, file shares, and security cam archives), they perform adequately right now, and the added RAM should help there too.
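If the SLOG ever does move, log vdevs are removable, so it's two commands. A sketch with hypothetical pool names and device paths:

```python
# Move a SLOG between pools. Pool names and the device id are made up.
import subprocess

def zpool(*args):
    subprocess.run(["zpool", *args], check=True)

SLOG_DEV = "/dev/disk/by-id/ata-INTEL_SSDSC2_example"  # placeholder

zpool("remove", "tenk", SLOG_DEV)      # detach log vdev from the old pool
zpool("add", "reds", "log", SLOG_DEV)  # attach it to the Reds pool
# Worth checking first whether the Reds workload even issues sync writes;
# watching "zpool iostat -v reds" during normal use will show it.
```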
 