Home lab ZFS Server Build

iamwhoiamtoday

I'm in the process of building out a ZFS file server for three XenServer hypervisors.

This is in one of the HP ProLiant G6 2U chassis with 25x 2.5" SAS bays in the front. (2x L5520s + 48GB memory)
These are all hooked up to an LSI HBA (not sure of the exact model, but it's one that plays nice with HP SAS expanders, so it shouldn't be an issue; it's in the mail and should arrive next week).

I have two 128GB M.2 SSDs on the same PCI-E 8x card with the intention of using them for L2ARC.
There is an Intel 10Gbps fiber NIC taking up the remaining PCI-E slot (connected to the main switch).
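For the L2ARC part, my understanding is the two M.2 SSDs just get attached as cache devices to whatever pool I end up building, roughly like this (the pool name "vmpool" and the device names are only placeholders for whatever the OS actually enumerates):

    # attach both M.2 SSDs as L2ARC cache devices to an existing pool
    zpool add vmpool cache ssd-m2-0 ssd-m2-1
    zpool status vmpool    # they should show up under a 'cache' section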

Now, the crux of the matter and the real question I need to ask you:
Hard drive configuration. I'm planning on booting this system from a USB drive.

Out of the 25 hotswap bays in the front, my drive breakdown looks like this:
5*120GB SATA SSDs
5*500GB SAS 7.2k HDDs
15*146GB SAS 10k HDDs (in the mail)


What configuration should I look at running them within ZFS?
At first I was planning on putting each set of 5 into its own RAIDz1. That'd basically give me 5*RAIDz1 vdevs together in the same pool.
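To make that concrete, my rough understanding is the whole thing could be created in one go, something like the following (pool name and disk names are just placeholders, not real device paths):

    # one pool built from five 5-disk RAIDz1 vdevs
    zpool create vmpool \
        raidz1 ssd0 ssd1 ssd2 ssd3 ssd4 \
        raidz1 sas500_0 sas500_1 sas500_2 sas500_3 sas500_4 \
        raidz1 sas146_0 sas146_1 sas146_2 sas146_3 sas146_4 \
        raidz1 sas146_5 sas146_6 sas146_7 sas146_8 sas146_9 \
        raidz1 sas146_10 sas146_11 sas146_12 sas146_13 sas146_14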

Capacity isn't a major issue; I have two UnRaid storage servers for bulk data / backups.
This system will be backed up to them probably once a day.

If I want to focus on performance, what configuration of drives should I go with?
Would I run into issues by having vdevs of radically differing performance in the same pool?




Each of my hypervisors is hooked up to the switch via 4*1Gbps bonded copper.

Please note that this is a home lab environment and purely for giggles. Everything is on UPSes and I'm pretty good about running my backups. These four systems (ZFS file server plus 3 hypervisors) will only be on during home-lab time (weekends).

Thank you for your time and input in this matter.
 
Performance-wise I would trash the 120GB SATA disks, or use them as boot disks instead of the USB drive, or for backups.
With the spinning disks you can build two pools:

Pool 1: RAID-10 with the 5x 500GB disks (two mirrors + hotspare) = 1 TB usable, 200-300 IOPS
Pool 2: multiple RAID-10 with the 15x 146GB disks (seven mirror vdevs + hotspare) = around 1 TB usable, less than 1,000 IOPS

From the two M.2 SSDs I would build a high-performance mirror with 128GB usable and 20,000-100,000 IOPS depending on the model.
(Performance-wise, especially regarding IOPS, SSDs are far better than 10k SAS, by a factor of 100 to 1000.)
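As a rough sketch of the two spinner pools above (pool and disk names are placeholders only, use whatever your system enumerates):

    # Pool 1: two mirrors + hotspare from the 5x 500GB disks
    zpool create pool1 mirror d500_0 d500_1 mirror d500_2 d500_3 spare d500_4

    # Pool 2: seven mirrors + hotspare from the 15x 146GB disks
    zpool create pool2 \
        mirror d146_0 d146_1   mirror d146_2 d146_3   mirror d146_4 d146_5 \
        mirror d146_6 d146_7   mirror d146_8 d146_9   mirror d146_10 d146_11 \
        mirror d146_12 d146_13 \
        spare d146_14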
 
Awesome, can do. I can probably cheaply replace the 5*120GB SATA SSDs with 146GB 10k SAS drives. That'd let me have a 20-drive effective RAID10 setup with 10 mirror vdevs.
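As I understand it, I wouldn't even need all the disks on day one; mirror vdevs can be striped in later as drives arrive, roughly like this (made-up device names again):

    # start with the pairs already on hand
    zpool create vmpool mirror sas0 sas1 mirror sas2 sas3
    # then stripe in additional mirror vdevs as the rest show up
    zpool add vmpool mirror sas4 sas5
    zpool add vmpool mirror sas6 sas7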

I'd love to go with the pipe-dream of one day populating all 25 slots with SATA SSDs, but that'd get pricey FAST.

Given the 48GB of memory for ARC, I don't think IOPS will be too much of an issue; I don't tend to run a crazy number of VMs that need crazy IOPS.
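Once it's up I'll keep an eye on ARC behavior anyway; if I understand the OpenZFS tooling right, something like this should show hit rates and cache sizes (exact tool names vary a bit between platforms):

    arcstat 5        # periodic ARC hit/miss and size stats, every 5 seconds
    arc_summary      # one-shot summary of ARC / L2ARC usage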


Note: my 500GB SAS drives are 7.2k; I've updated my initial post with this info.
 