dadomonster
n00b
- Joined
- Feb 21, 2015
- Messages
- 2
Hi,
As an intro: I have played with computers for as long as I can remember, but I've never had to do a proper server setup for software RAID. I've read a few of the build logs on the forum and elsewhere on the net, and have a basic understanding of what is required.
Main goal:
Set up RAID-Z2 with 8x SATA drives = 6 data + 2 parity.
Usage will be a home share for videos/music/photos, plus running 6 VMs to start with.
For the parts, I've been thinking along these lines:
MOBO: Supermicro X9??? (X9SCM-F, X9SCA-F, ..)
CPU: Intel Xeon E5-???? because of 32GB+ RAM in the future (or an E3-1230v3 if 32GB is enough)
MEM: 2-4x Kingston ValueRAM 8GB DDR3 unbuffered ECC
SSD1 (for OS): 60GB SSD
SSD2 (for ZFS cache/ZIL): 60GB SSD
CASE: X-CASE RM 420 Gen II (or the 424, but I won't have use for that many disk slots anyway) (http://www.xcase.co.uk/4u-rackmount...i-sas-new-version-due-soon-279-00-x-case.html)
NIC: Onboard 1GbE (1 or 2 ports) with an Intel chip, or an add-in Intel card later if needed
SAS card: Onboard (preferred), or an LSI card (or an IBM M1015 etc. flashed to LSI firmware)
What I've gathered here is one way to set things up (napp-it.org):
1) Install ESXi and OpenIndiana to the SSD.
2) Pass the SAS card through to OpenIndiana -> this requires VT-d, hence a Xeon.
3) OpenIndiana runs ZFS on the disks behind the SAS card.
4) The ZFS pool is shared back to ESXi and the VMs run from there.
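From the napp-it guides, I'm guessing the pool-creation side of steps 3-4 would look roughly like this on OpenIndiana. This is just my untested sketch; the pool name, disk device names (c1t0d0 etc.) and filesystem name are placeholders:

```shell
# Create a RAID-Z2 pool from the 8 drives passed through on the SAS card
# (c1t0d0 ... c1t7d0 are placeholder Solaris-style device names)
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0

# Optionally use the second SSD as L2ARC read cache and/or a log (ZIL) device
zpool add tank cache c2t0d0

# Create a filesystem and share it back to ESXi over NFS
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore
```

Please correct me if the workflow is different from this.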
Questions:
Q1) What MoBo would you recommend from the Supermicro range? Could I do the setup with the onboard connectors, or do I need the LSI/IBM card?
Q2) I remember reading that there are some restrictions to watch for on the add-in cards regarding support for 3TB and 4TB drives? Their speed seems to be 6 Gb/s; should it be faster, or does the price go up too much?
Q3) I would like to run a Windows Server 2012 VM to share the ZFS pool to the internal network; is this a dumb idea? I have usually used Samba or NFS, but Samba has performance issues at higher speeds, and since we use some Windows machines, NFS might not be viable?
Q4) What software setup would you recommend for RAID-Z2 and running VMs? (The VMs will not be on SSD, as their performance is not critical.)
Q5) Is the X-Case fine for the build? Is there something special to check on the backplanes; do they affect speed?
Q6.1) What if I were to do an HDD setup of 10+2? Would this be viable given the card hosts 8 disks; would I need another SAS card, or could I use the MoBo's connectors? Would it affect the speed of the array when the disks are not all on the same card? Does this affect VT-d, or does VT-d come purely from the CPU (I think so)?
Q6.2) From what I've read, the number of HDDs should be a power of two, not counting the parity drives. If using a 10+2 configuration, would there be a huge loss of space due to recordsize vs. the 4K HDD block size?
Q7.1) How does RAID health monitoring work in RAID-Z2? Do I need to run some specific program, or do I use a script that calculates checksums weekly/daily?
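For Q7.1, what I've understood so far is that ZFS verifies all checksums with a "scrub" that you can just put in cron; something like this (my assumption, not verified, pool name is a placeholder):

```shell
# Kick off a scrub of the pool (walks all data and verifies checksums)
zpool scrub tank

# Check scrub progress and any errors found
zpool status tank

# Example cron entry to scrub weekly (Sunday 03:00)
# 0 3 * * 0 /usr/sbin/zpool scrub tank
```

Is that the right approach, or is a separate monitoring program still recommended?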
Q7.2) What if one drive breaks? Let's assume I have a new drive on the shelf waiting (= cold spare): I swap it in, run some command, and the server takes n days to resync the RAID? Will the RAID be usable during the resync?
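And for Q7.2, I believe the replacement step would go something like this (again just my sketch, with placeholder device and pool names):

```shell
# Tell ZFS to rebuild (resilver) onto the new disk in the failed disk's slot
zpool replace tank c1t3d0

# Watch resilver progress; I understand the pool stays online meanwhile
zpool status tank
```

Does that match how people here actually handle a failed drive?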
Q8) What different firmwares are used on the SAS cards? Do I need the IT firmware for the LSI/IBM card in this case?
If you can point me to howtos/tutorials for the software installation part, that would be great.
Sorry for the noob questions; nowadays I don't have much time to sit and surf the net for every detail or to experiment with new gear.
Any tips for a successful build greatly appreciated!