I originally wanted to put together a server to store my ever-growing media collection. This sort of snowballed into more than just a media server--it would also be the server for HARC Technology (my business).
So now the plan is...
2 x Head Nodes
1 x 4U JBOD Chassis/Storage Shelf (45 x 3.5" Disk Slots)
1 x 2U JBOD Chassis/Storage Shelf (12 x 3.5" Disk Slots)
(NB: I might be missing some totally obvious things, so please point them out. Also, I have very little idea what I should set up for VMware as far as networking etc. go, so I haven't included that and would really appreciate any advice on it.)
SPECIFICATIONS
Head Nodes (x2)
SuperMicro 826A-R1200LPB
SuperMicro X8DTH-6F
2 x Intel Xeon E5645 Hex-Core 2.40GHz CPU
12 x Crucial 8GB DDR3-1333 Reg ECC 1.5V RAM (96GB Total)
2 x Intel 320 SSD 40GB (RAID1 Mirror for ESXi)
3 x LSI SAS 9205-8e HBA (2 for 4U & 1 for 2U using dual-uplinks)
Intel 10GbE AF DA Dual Port Server Adapter (2 x 10GbE) [Is this a good choice?]
2 x Intel Gigabit Quad Port Server Adapter (2 x 4 x 1GbE) [Which one? I350, ET, ET2, EF...]
(So this gives 10 x 1GbE + 2 x 10GbE in total: 2 x 1GbE onboard + 8 x 1GbE from the quad-port cards + 2 x 10GbE from the 10GbE card.)
4U Storage Shelf
SuperMicro 847E26-RJBOD1
15 x Seagate Constellation ES.2 3TB 64MB
(HDDs: 2 x RAID-Z2 (5+2) + 1 x Hot Spare = 10 x Data + 4 x Parity + 1 x Hot Spare)
(NB: In the future, I would expand my pool by adding blocks of 15 drives based on the same structure as above, so that I can expand to the full 45 drives including 3 hot spares.)
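Quick sanity math on the usable space of that layout (shell arithmetic; TB here means the drive's marketing terabytes, before ZFS overhead):

```shell
# Per 15-drive block: 2 RAID-Z2 vdevs of (5 data + 2 parity), plus 1 hot spare.
DRIVE_TB=3
DATA_DISKS_PER_BLOCK=$((2 * 5))                                   # 10 data disks
echo "$((DATA_DISKS_PER_BLOCK * DRIVE_TB)) TB usable per block"   # 30 TB
echo "$((3 * DATA_DISKS_PER_BLOCK * DRIVE_TB)) TB at 45 drives"   # 90 TB
```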
2U Storage Shelf
SuperMicro 826E26-R1200LPB
8 x SSD for L2ARC [I'm thinking: Intel 320 SSD 160GB]
4 x SSD for ZIL (2 mirrored pairs) [I'm thinking: Intel 311 SSD 20GB. Thoughts?]
12 x LSISS9252 (One interposer for each SATA SSD)
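For reference, the pool layout described above would look roughly like this in ZFS terms. This is only a sketch, not a tested command line: the pool name tank and the device names (d1..d15, s1..s12) are placeholders for the real disk IDs.

```shell
# First 15-drive block: two 5+2 RAID-Z2 vdevs plus one hot spare.
zpool create tank \
    raidz2 d1 d2 d3 d4 d5 d6 d7 \
    raidz2 d8 d9 d10 d11 d12 d13 d14 \
    spare d15

# ZIL as two mirrored log pairs; L2ARC cache devices need no redundancy.
zpool add tank log mirror s1 s2 mirror s3 s4
zpool add tank cache s5 s6 s7 s8 s9 s10 s11 s12
```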
Software
On each Head Node:
VMware vSphere Hypervisor/ESXi
Solaris 11 (VM) in Active/Passive configuration [or comparable OS--which should I be going for?] - 2 x 10GbE (1 to switch + 1 crossover between nodes)
VMs Load-Balanced between Nodes:
SBS 2011 Standard (SBS 2011 Premium) - 2 x 1GbE (teamed)
WS 2008 R2 for SQL + VMware vCenter (SBS 2011 Premium) - 1 x 1GbE
WS 2008 R2 for RDS - 1 x 1GbE
WS 2008 R2 for Lync Server - 1 x 1GbE
WHS 2011 - 2 x 1GbE (teamed)
Miscellaneous Temporarily-Run Testing VMs (mostly Windows 7) - 1 x 1GbE
(2 x on-board 1GbE for VMware use) [I'm thinking 1 for management and 1 crossover between nodes for vMotion? No idea what I should be doing here...]
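Adding up the per-VM port assignments above (a quick shell check, 1GbE only; the two 10GbE ports go to the Solaris storage VM):

```shell
VM_GBE=$((2 + 1 + 1 + 1 + 2 + 1))  # SBS, SQL/vCenter, RDS, Lync, WHS, test VMs
MGMT_GBE=2                         # onboard pair: management + vMotion crossover
echo "$((VM_GBE + MGMT_GBE)) x 1GbE needed"  # matches the 10 x 1GbE per node
```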
Notes/Questions
- We have MAPS, TechNet, etc. subscriptions for all the Microsoft production & testing licenses, and NFR licensing for VMware, so the cost of those isn't an issue. All other software, though, would need to be acquired.
- VMware Networking & NIC allocation? (As mentioned above.)
- Because both chassis are E26 (dual-expander) versions, every drive has SAS2 multipath (MPIO) with a path to each head node.
- Is Solaris 11 the right choice for a ZFS storage OS to: share media storage directly to clients, maintain the data volume for SBS and hold the VM store for VMware? How do I go about implementing dual-instance redundancy?
- What do I need to do/have to allow for auto-vMotion of VMs from one node to another if one fails? How do I implement vMotion for auto-load-balancing?
- Are my specifications sufficient/overkill/appropriate?
- I'm sure more questions will come...
If any other information is required, please let me know! I appreciate all help and advice, especially as this setup is as much for learning (and I've got a lot of that to do) as it is for production and testing. THANKS!