Fileserver hardware and technology questions for new build

Hi,

As an intro: I have played with computers for as long as I can remember, but I have never had to do a proper server setup with software RAID. I've read a few of the build logs on this forum and elsewhere on the net, and I have a basic understanding of what is required.

Main goal:
Set up RAID-Z2 with 8x SATA drives = 6 storage + 2 parity.
Usage will be a home share for videos/music/photos, plus running 6 VMs to begin with.

For the parts, I'm thinking something along these lines:
MOBO: Supermicro X9??? (X9SCM-F, X9SCA-F, ..)
CPU: Intel Xeon E5-???? because of 32GB+ in the future (or E3-1230v3 if 32GB is enough)
MEM: 2-4x Kingston ValueRAM 8GB DDR3 unbuffered ECC
SSD1 (for OS): 60GB SSD
SSD2 (for ZFS cache/zil): 60GB SSD
CASE: X-CASE RM 420 -GEN II (or the 424, but I won't have use for that many disk slots anyway) (http://www.xcase.co.uk/4u-rackmount...i-sas-new-version-due-soon-279-00-x-case.html)
NIC: Onboard 1GbE (1 or 2) with an Intel chip, or an add-in Intel card when needed in the future
SAS card: Onboard (preferred), or an LSI card (or IBM M1015 etc. flashed to LSI firmware)

From what I've gathered, this is one way to set things up (napp-it.org):
1) Install ESXi and OpenIndiana to the SSD.
2) Pass the SAS card through to OpenIndiana -> this requires VT-d, i.e. a Xeon.
3) The disks on the SAS card hold the ZFS pool inside OpenIndiana.
4) The ZFS storage is shared back to ESXi, and the VMs are run from there.


Questions:
Q1) What MoBo would you recommend from the Supermicro range? Could I do the setup with the onboard connectors, or do I need the LSI/IBM card?
Q2) I remember reading that there are some restrictions to watch for on add-on cards regarding support for 3TB and 4TB drives? Their speed seems to be 6Gb/s; should it be faster, or will the price go up too much? :)
Q3) I would like to run a Win2012 Server VM to share the ZFS storage to the internal network; is this a dumb idea? I have usually used Samba or NFS, but Samba has performance issues at higher speeds, and since we use some Windows machines, NFS might not be viable?
Q4) What software setup would you recommend for RAID-Z2 and running VMs? (The VMs will not be on SSD, as their performance is not critical.)
Q5) Is the X-Case fine for the build? Is there something special about the backplanes that should be checked, and do they affect speed?
Q6.1) What if I were to do an HDD setup of 10+2? Would this be viable, given that the card hosts 8 disks - would I need another SAS card, or could I use the MoBo's connectors? Would it affect speed when the disks are not all on the same card? Does this affect VT-d, or does VT-d just come from the CPU (I think so)?
Q6.2) From what I've read, the number of HDDs should be a power of two, not counting the parity drives. With a 10+2 configuration, would there be a huge loss of space due to recordsize vs the 4k HDD block size?
Q7.1) How does RAID health monitoring work with RAID-Z2? Do I need some specific program to run, or do I use some script that calculates checksums weekly/daily?
Q7.2) What if one drive breaks? Let's assume I have a new drive waiting on the shelf (a cold spare); I swap it in, run some command, and the server takes n days to resync the RAID? Will the RAID be usable during the resync?
Q8) What different firmwares are used on the SAS cards? Do I need the IT firmware for the LSI/IBM card in this case?

If you can point me to howtos/tutorials for the software installation part, that would be great.

Sorry for the noob questions; these days I don't have much time to sit and surf the net for every detail, or to experiment with new gear :)

Any tips for a successful build are greatly appreciated!
 
If you are looking for an all-in-one/napp-in-one solution:

Q1
The X10SL7-F is used quite often if a max of 32GB RAM is ok.
The X9SRH-7TF is perfect if you want the full program (max 512GB RAM, LSI HBA, dual 10Gb NIC).

Q2
LSI 2008 cards support disks > 2TB; no problem with the above mainboards.

Q3
I would first try the Solaris CIFS server, as it fully supports Windows ACLs and you can use ZFS snaps as Windows "Previous Versions". If you want to use Windows for sharing, you can use iSCSI to present ZFS storage to Windows like a local disk.
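As a rough sketch on OmniOS/OI (the dataset names "tank/media" and "tank/winlun" are just examples, not from your build): the kernel CIFS share is a single ZFS property, and iSCSI goes through COMSTAR - the exact COMSTAR steps vary a bit, so follow the napp-it docs for the details.

  zfs set sharesmb=on tank/media                  # share a filesystem via the built-in CIFS server

  zfs create -V 200G tank/winlun                  # block volume to hand to Windows over iSCSI
  sbdadm create-lu /dev/zvol/rdsk/tank/winlun     # register it as a COMSTAR logical unit
  stmfadm add-view <LU-GUID-from-previous-step>   # make the LU visible to initiators
  itadm create-target                             # create the iSCSI target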

Q4
I would use one small SSD-only pool for the VMs (e.g. a mirror) and a RAID-Z2 (or multiple Z2 vdevs) for general storage.
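For illustration only (the pool and device names below are placeholders, not your actual disks), the two pools would be created roughly like this:

  zpool create vmpool mirror c1t0d0 c1t1d0        # small mirrored SSD pool for the VMs

  # 8-disk RAID-Z2 (6 data + 2 parity) for general storage
  zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0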

Q5
I would prefer a 19" server case with a backplane for easier disk handling (not performance relevant).

Q6.1
I would use high-capacity disks, e.g. a RAID-Z2 with 6 or 10 disks or a Z3 with 7 or 11 disks.

VT-d (passing hardware directly to a VM) works on a per-PCI-device level. If you need more disks, you need another PCIe device (or must use ESXi + RDM disk mapping).

Q6.2
Usable capacity is higher if the number of data disks is a power of 2 (I would prefer this with 4k disks).
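A very simplified back-of-envelope illustration with the default 128 KiB recordsize and 4 KiB sectors (it ignores the exact raidz parity/padding rules, so treat it as a rough guide only):

  128 KiB / 8 data disks  = 16 KiB   = exactly 4 sectors per disk (clean fit)
  128 KiB / 10 data disks = 12.8 KiB = 3.2 sectors per disk (not a whole number, so raidz adds padding)

In practice the overhead with 10 data disks is a few percent of capacity - noticeable, but not a huge loss.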

Q7.1
ZFS verifies checksums on every read and auto-repairs on problems.
You can start a scrub, e.g. monthly, to check all data - this is all built into ZFS.
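For example (the pool name "tank" is a placeholder), a monthly scrub plus a status check is all the "monitoring program" you need; napp-it also has a job scheduler that can run this for you:

  zpool scrub tank            # read and verify every block in the pool
  zpool status -v tank        # show scrub progress, checksum errors and failed disks

  # or run it automatically from root's crontab, e.g. at 03:00 on the 1st of each month
  0 3 1 * * /usr/sbin/zpool scrub tank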

Q7.2
A hot-spare disk replaces a faulted disk automatically (or on demand with a disk replace).
Aside from a slight speed degradation, you can use the pool normally.
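With a cold spare the replace is a single command (device names below are examples); a hot spare is simply added to the pool beforehand so ZFS can pull it in automatically:

  zpool replace tank c2t3d0 c2t8d0   # after physically swapping the disk, resilver onto the new one
  zpool status tank                  # watch resilver progress; the pool stays online throughout

  zpool add tank spare c2t8d0        # alternative: keep the disk attached as a hot spare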

Q8
Your disk controller should pass disks to ZFS without a hardware RAID layer.
Best is the raid-less IT firmware, but an IR firmware (RAID 1/10) mostly works as well.
Avoid RAID 5/6 capable cards.

If you use a card like the LSI 9211 or 9207, prefer IT firmware release P19 (P20 can have problems).
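For reference, crossflashing an M1015 or reflashing to IT firmware is usually done from a DOS/EFI shell with LSI's sas2flash tool. The file names below are only examples from a P19 package and the exact flags depend on the sas2flash version, so follow a dedicated guide (e.g. the flash_x9srh-7tf_it.pdf linked below):

  sas2flash -listall                           # check that the controller is visible
  sas2flash -o -f 2118it.bin -b mptsas2.rom    # write the IT firmware plus boot ROM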
I would prefer OmniOS (stable) over OI (dev)
Skip the SSD L2ARC cache, use more RAM
If you need a ZIL, use an Intel S3700 (either as a ZIL for slower SSDs, or as an SSD-only pool without a ZIL).
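If you do add a dedicated ZIL (SLOG) device later, it is a one-liner against an existing pool (the device name is a placeholder):

  zpool add tank log c3t0d0       # add the S3700 as a dedicated log device
  # zpool add tank cache c3t1d0   # an L2ARC device would be added the same way, if ever needed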

Some of my (own) howtos:

http://www.napp-it.org/doc/downloads/napp-it.pdf
http://www.napp-it.org/doc/downloads/napp-in-one.pdf
http://www.napp-it.org/doc/manuals/flash_x9srh-7tf_it.pdf

or
http://www.servethehome.com/installing-configuring-napp-it-openindiana-zfs/

Download the manuals from Oracle for Solaris 11 Express (perfect for OmniOS as well).
(Click on "download", or you will be linked to the newer Solaris 11.2 manuals.)
http://archive.today/snZaS
 

May I suggest Server 2012 R2 or Windows 8 Pro as the Hyper-V host instead of ESXi, then run your OI + napp-it on that? Hyper-V is well known to support drive and hardware pass-through better than VMware-based setups.
 
Thanks for the great reply _Gea.
Lost-Benji: I think I'll stick with ESXi as the hypervisor for now, as the support on the net is better (for my liking).
PHubb: I'm from Finland so postage on that was more than the cost :)


I checked those mobos; the X9SRH goes a bit beyond my needs, while the X10SL7-F seemed quite nice, as I could use the onboard LSI for now. Does it work out of the box (after upgrading to the latest firmware), or does it need any tricks before use (other than disabling the RAID)?

I assume the installation of ESXi and OmniOS can be done via IPMI, or do I need a GPU card, since these Xeons do not have integrated graphics?

If I were to go for a mirrored SSD pool for the VMs, then I'd need the extra SAS card (if not using the onboard 2x SATA 6Gbps) - no problem, as the X10 in question has the PCI Express slots. Would it matter in the future that they are x8 and x16 if I need a third card for another Z2 of 8 disks?
 
ESXi may not support the newest Intel NICs out of the box; in that case you need to add the i210 drivers:
http://www.servethehome.com/install-vmware-esxi-5x-intel-i210-intel-i350-ethernet-adapters/

There is an onboard GPU on Supermicro server boards, so no need for an extra GPU.

Socket 1150 chipsets have a limited number of PCIe lanes, so you can use a max of 3 storage adapters (one is already onboard). It does not really matter whether they are x4 or x8 unless you are using lots of high-performance SSDs - no need for x16.
 
I'm looking at getting one of these for a build.

http://www.ebay.com/itm/Supermicro-...54?pt=LH_DefaultDomain_0&hash=item566fd2e526#

24GB of RAM with only a third of the slots used. That's 72GB of RAM when fully populated with the not-so-expensive 4GB sticks.

And it's got PCI-E and PCI-X slots. The latter should be a decent way to use inexpensive HBAs which meet my needs, in addition to the 6 SATA ports already on the board.
 
I believe ESXi 5.5 U2 now supports the i210 out of the box
 