ZFS RaidZ2 Homeserver - build log

Finally managed to source all the parts I wanted, AND found the time to assemble the beast :D

Related threads:
Selection of parts: ZFS RAIDz2 Homeserver - sanity check
Discussion about which OS to use: ZFS & Encryption - Information overload

As a reminder, the gear:

Without further ado, here's the 4U Xcase 424. I'm quite positively surprised by it; maybe not as sturdy and hefty as the high-end Chenbro I had before, but at least it will be way quieter :)
http://imgur.com/VJy7N8Y


The Supermicro mainboard with only the processor and RAM installed... I love how many free RAM slots are left with 32 GB already installed:
http://imgur.com/f1ZJ3Ei


Unpacking the HDs was a joyful moment, and I had to stack them into the obligatory hard disk wall :D
http://imgur.com/MwilbSw


Here's a 4-port Intel GbE controller card... it should come in handy for the server's routing duties (more details on that later).
http://imgur.com/RqyBY30


And the impressive LSI 9201-16i SAS2 controller card. This puppy allows 16 HDs to be connected, and together with the 8-port LSI controller on the Supermicro mobo, that brings it to a grand total of 24 drives... which is rather convenient.
http://imgur.com/4Ya68xO


I really underestimated the stiffness of those SAS cables... I thought they would be quite flexible... they weren't. But patience worked it out :)
http://imgur.com/yimAxkw
http://imgur.com/VKgCgxG


And now for the final build with all cards & cables installed:
http://imgur.com/h4ZoZAp


The two black pieces you can see on the right are StarTech SATA hot-swap bays:
http://www.amazon.com/StarTech-com-2-5in-Removable-Drive-Expansion/dp/B002MWDRD6
These bays hold the Samsung SSD for the VM store and the 1TB spinning drive it will use for backing up the VMs.

ESXi is installed onto a USB stick which conveniently sits on the motherboard itself.
Btw, installing ESXi was a piece of cake, and I'm in love with Supermicro's IPMI implementation: I only had to put the VMware ISO on a network share, tell the motherboard via IPMI to mount it as a virtual CD drive, and reboot:
voilà, the ESXi installation starts. That's one feature I will dearly miss in my next desktop builds.

So, I'm really happy with the build for now.

Next steps :
  • Install the vSphere client on a Windows machine
  • Update the motherboard and LSI card firmware/BIOS
  • Flash the onboard SAS controller on the Supermicro board to HBA (IT) mode (rough sketch below the list)
  • Install the SSD and backup drive and dedicate them to the VM store
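
For the flashing step, here's a rough sketch of what that usually looks like with LSI's sas2flash utility from a DOS/UEFI boot stick. The controller index and the firmware/BIOS file names below are only placeholders; the actual images have to match the onboard SAS chip and come from the LSI/Supermicro download pages.

sas2flash -listall                                # note the index of the onboard SAS controller
sas2flash -c 0 -list                              # double-check it is the right one before touching it
sas2flash -o -e 6 -c 0                            # erase the existing (IR) firmware; do not reboot yet
sas2flash -o -f 2308IT.bin -b mptsas2.rom -c 0    # flash IT/HBA firmware plus boot BIOS (example file names)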

After that I will try to implement the following ESXi setup:
(Planned ESXi network layout diagram: S9xRRMn.png)


The main idea is that I want to segregate the networks and give them different access rules via pfSense (a rough sketch of the ESXi side follows after the list):
  • The LAN network will be for normal home network traffic, nothing special here.
  • The green one will be internet access for the VM servers. Access will be severely limited, probably only to update servers (apt-get, ports, ...).
  • The management network (yellow) won't have access to the internet: here and only here are IPMI, ESXi management, and VM SSH available.
  • Access from the internet to the management network will only be possible through VPN-authorised routing in pfSense.
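
To give an idea of the ESXi side, here's a minimal sketch of how the separate networks could be carved out as standard vSwitches and port groups from the ESXi shell. The vSwitch/port group names and the vmnic assignment are made-up examples; pfSense then simply gets one vNIC in each port group and enforces the rules between them.

esxcli network vswitch standard add --vswitch-name=vSwitch-LAN
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch-LAN
esxcli network vswitch standard portgroup add --portgroup-name=LAN --vswitch-name=vSwitch-LAN

esxcli network vswitch standard add --vswitch-name=vSwitch-Servers        # "green" VM internet
esxcli network vswitch standard portgroup add --portgroup-name=Servers --vswitch-name=vSwitch-Servers

esxcli network vswitch standard add --vswitch-name=vSwitch-Mgmt           # "yellow", no physical uplink to the internet
esxcli network vswitch standard portgroup add --portgroup-name=Management --vswitch-name=vSwitch-Mgmt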

Thanks for reading, and I'm looking forward to your opinions, suggestions and ideas for furthering the build :D
 
I wonder how the noise is on that chassis? I bought the RM 424 Pro version, and when I copy files or do something like that the fans really start to roar.
 
Can't say just yet, the HDs aren't installed. I'll test it once I get far enough in the build :)
 
What's the idea behind 10Gb to the backup server? And what are the details on your backup server?
 
What's the idea behind 10Gb to the backup server? And what are the details on your backup server?

Actually, that's just the second 10GbE interface of the Supermicro mobo connecting to the fileserver... I doubt I will add a 10GbE card to the backup server.

I need to rebuild the backup server first, as it's not up to speed yet. I'll probably reuse an old Core 2 Quad mobo, add an M1015 + SAS expander, and do a pool of 10x2TB RAIDZ2 and 4x5TB RAIDZ1.
 
Yesh :)
The server is running ESXi nicely, and I'm experimenting with a few OSes (FreeBSD, NAS4Free and Gea's napp-it)... I'll soon commit to an OS and migrate the data from my old fileserver.

Progress was slower than I thought, as work was too busy... and so were my holidays :)
 
Yesh :)
The server is running ESXi nicely, and I'm experimenting with a few OSes (FreeBSD, NAS4Free and Gea's napp-it)... I'll soon commit to an OS and migrate the data from my old fileserver.

Progress was slower than I thought, as work was too busy... and so were my holidays :)

I totally understand, as it's always been slow going for me when switching platforms. I'm interested in anything you have to share about using 10Gb and what you've found so far.

Thanks!
 
Nice setup. I'm planning on building a very similar box for my next FreeNAS file server.

I had never considered using the motherboard's SAS controller to drive some of the disks. I was planning on using an LSI 9211-8i + RES2SV240 expander and using all ports on the RES2SV240 to connect the 24 drives. But I might be able to save some money if I got an LSI 9201-16i + the motherboard SAS like you did.

My only concern is that I want to do 12 drives in a RAID-Z2 now and then another 12 drives in Z2 in the future.

My question is: can the LSI 9201-16i control the onboard SAS, or are the two isolated? Meaning, can I use the 9201 + the onboard to make a 12-disk Z2 after already making a 12-disk Z2 with the 9201 to begin with?

I suppose with this setup you could do an 8-disk Z1/Z2 with the onboard and then make a 16-disk Z3 with the 9201, but in my case I really want to do two 12-disk Z2s.
 
My question is: can the LSI 9201-16i control the onboard SAS, or are the two isolated? Meaning, can I use the 9201 + the onboard to make a 12-disk Z2 after already making a 12-disk Z2 with the 9201 to begin with?

ZFS doesn't care what the drives are connected to as long as they are available to the underlying OS, so yes, you could do 12 and 12.
The only problem would be if you were virtualizing: the whole controller must be passed through to a single guest, so you couldn't use one of the onboard ports for a datastore drive and the remaining seven for ZFS.
 
If this is for a virtualization workload, I would not recommend any form of raidz*. It will work, but the random IOPS will suffer, depending on how many vdevs you have. RAID10 (striped mirrors) will give you far better random IOPS, especially for reads.
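
To make the trade-off concrete, here's roughly what the two layouts look like with 12 disks (pool and device names are just placeholders; in practice you'd use /dev/disk/by-id paths or, in the OP's case, the crypt_* mapper names). A single raidz2 vdev delivers the random IOPS of roughly one disk, while six 2-way mirrors spread random I/O across six vdevs at the cost of half the capacity.

# one 12-disk raidz2 vdev: maximum capacity, poor random IOPS for VMs
zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl

# the same 12 disks as six striped mirrors ("RAID10"): half the capacity, far better random IOPS
zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf \
                  mirror sdg sdh mirror sdi sdj mirror sdk sdl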
 
Small update: had some time to work on the server this weekend... man, can server hardware and ESXi be bitchy... first the network card wasn't recognized during boot and didn't show up in lspci... probably some DMA trouble, so I had to swap cards around until they were all finally recognized.

Then ESXi decided not to recognize my quad-port Intel NIC... and I had to generate an ESXi install ISO with the drivers added.

Now all's well though, and I can *finally* start with the fileserver OS. After much thought I will be trying Debian 7.5 first... with ZoL. Firstly because it does ZFS, does encryption via LUKS, is extremely stable on ESXi, has no vmxnet woes I know of, has Samba v4, and most importantly, Gea's napp-it runs on it.
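
For reference, once the zfsonlinux.org Debian repository is added (see zfsonlinux.org for the current wheezy repo package, the names may have changed), getting ZoL onto Debian 7.x boils down to something like this:

apt-get update
apt-get install debian-zfs      # metapackage that pulls in the SPL/ZFS DKMS modules and the userland tools
modprobe zfs
zpool status                    # "no pools available" means the modules built and loaded fine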

I will give it a whirl in the next few days.
 
Now all's well though, and I can *finally* start with the fileserver OS. After much thought I will be trying Debian 7.5 first... with ZoL. Firstly because it does ZFS, does encryption via LUKS, is extremely stable on ESXi, has no vmxnet woes I know of, has Samba v4, and most importantly, Gea's napp-it runs on it.

For me, the best ZFS storage, iSCSI/NFS and SMB server is Solaris & clones like OmniOS. But when it comes to encryption and well-supported user apps, Linux wins. (Maybe Solaris 11.2 can change this regarding encryption, but I have not tested it myself. Solaris 11.1 with the encryption built into ZFS was quite slow.)
 
For me, the best ZFS storage, iSCSI/NFS and SMB server is Solaris & clones like OmniOS. But when it comes to encryption and well-supported user apps, Linux wins. (Maybe Solaris 11.2 can change this regarding encryption, but I have not tested it myself. Solaris 11.1 with the encryption built into ZFS was quite slow.)

Unfortunately Solaris 11.2 is closed source; I wish Oracle would take a more open stance. As for encryption, I have been using it since my TrueCrypt-on-Vista days (don't ask), and I feel I would be taking a step backward if I dropped it. I'm also more familiar with Debian, although I can't say I'm fluent.

Next planned steps :
- install a vanilla AES-encrypted 64-bit Debian 7.5 into a 16GB VM
- read up on the udev documentation. I want udev to assign device names by controller port. This is important to me, as I want to know which failed HD to replace**. I will probably tell udev to name the HDs disk_a4 (first column/pool, 4th row); see the rough sketch after this list
- create a LUKS device for each hard disk and use the same passphrase as for the Debian boot. I'll name them crypt_a4, for example
- add the LUKS keys from the Debian boot to the HD LUKS devices... that way they will all unlock on boot and shouldn't bother ZFS
- build a pool of 2 x 6-disk raidz2
- install Samba 4
- do a lot of tests (performance, simulated disk failure, etc.)
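
A very rough sketch of how the udev, LUKS and pool steps could look for a single bay. Everything here is a placeholder: the ID_PATH value must come from udevadm info for the real bay, the keyfile path is made up (the shared-passphrase approach works the same way via luksAddKey), and "tank" is just an example pool name.

# /etc/udev/rules.d/99-disk-bays.rules -- pin a bay to a stable name via its controller path
# (find the value with: udevadm info --query=property --name=/dev/sdX | grep ID_PATH)
KERNEL=="sd?", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_PATH}=="pci-0000:03:00.0-sas-phy3-lun-0", SYMLINK+="disk_a4"

# encrypt the bay and register it so it unlocks at boot under a predictable mapper name
cryptsetup luksFormat /dev/disk_a4
cryptsetup luksAddKey /dev/disk_a4 /etc/keys/zfs.key                      # same key material the boot volume uses
echo "crypt_a4 /dev/disk_a4 /etc/keys/zfs.key luks" >> /etc/crypttab
cryptsetup luksOpen --key-file /etc/keys/zfs.key /dev/disk_a4 crypt_a4

# once all twelve crypt_* devices exist, build the pool as 2 x 6-disk raidz2
zpool create tank \
    raidz2 /dev/mapper/crypt_a1 /dev/mapper/crypt_a2 /dev/mapper/crypt_a3 /dev/mapper/crypt_a4 /dev/mapper/crypt_a5 /dev/mapper/crypt_a6 \
    raidz2 /dev/mapper/crypt_b1 /dev/mapper/crypt_b2 /dev/mapper/crypt_b3 /dev/mapper/crypt_b4 /dev/mapper/crypt_b5 /dev/mapper/crypt_b6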


** To explain this point: I always had problems on my fileservers with changing device names... remove the 2nd of 3 HDs, and the third HD (sdc) suddenly becomes sdb. I also didn't fancy disk identification by UUID, as that felt unhandy for failed-disk replacements. So now I want to tell udev to assign device names according to controller ports... meaning each HD bay will always get the same device name. In case of a failed HD swap:
1° remove the failed HD, easily identified by its device name (disk_a4)
2° pop in a new HD, which gets the same device name
3° encrypt the new HD with LUKS and add the boot keys
4° put the new crypt_a4 into the array

That's the theory at least... I will have to test how it works out. Encryption is always a bit unwieldy, but this felt like the smoothest way to do it... as usual I'm grateful for any suggestions. Roughly, the swap would translate into the commands below.
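
(Same placeholders as in the sketch above: the keyfile path and the pool name "tank" are made up for illustration.)

# the replacement disk shows up as /dev/disk_a4 again thanks to the udev rule
cryptsetup luksFormat /dev/disk_a4
cryptsetup luksAddKey /dev/disk_a4 /etc/keys/zfs.key
cryptsetup luksOpen --key-file /etc/keys/zfs.key /dev/disk_a4 crypt_a4
zpool replace tank crypt_a4          # same name, same slot, so one argument is enough
zpool status tank                    # watch the resilver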
 
This looks really sick. Thanks for sharing. Is this for personal or professional use?
 
** To explain this point: I always had problems on my fileservers with changing device names... remove the 2nd of 3 HDs, and the third HD (sdc) suddenly becomes sdb. I also didn't fancy disk identification by UUID, as that felt unhandy for failed-disk replacements.
This should not occur unless the system is rebooted with the 2nd HD removed. Even then, it is only an issue if you are mounting each HD by device name (e.g. /dev/sdc), which should not be required in a RAID setup. I cannot comment on ZFS, but mdadm will identify all devices in the RAID volume regardless of the device name.

So now I want to tell udev to assign device names according to controller ports... meaning each HD bay will always get the same device name. In case of a failed HD swap:
1° remove the failed HD, easily identified by its device name (disk_a4)
2° pop in a new HD, which gets the same device name
3° encrypt the new HD with LUKS and add the boot keys
4° put the new crypt_a4 into the array

That's the theory at least... I will have to test how it works out. Encryption is always a bit unwieldy, but this felt like the smoothest way to do it... as usual I'm grateful for any suggestions
I am very interested in how you set up your udev rules to assign each HD name. IMO, it's not really necessary, since you should already know which port /dev/sdc is connected to (i.e., the third SATA port).
 