Help with Multi-Node Home Hyper-V Cluster

USMCGrunt · 2[H]4U · Joined Mar 19, 2010 · Messages: 3,103
I've got a bunch of old parts lying around that I can use for a small home lab. I previously had a 2-node cluster built from all this stuff and a couple of other parts, but I recently disassembled it to build a gaming computer for my son (pulled a 4th-gen i7 out of it). I want to rebuild the lab because there are things I want to play with, but I'm wondering if anybody out there with more experience than me could suggest a better config than what I cobbled together.

Here are all the parts I currently have at my disposal. I'm pretty well versed in networking and infrastructure, so the technical configuration is something I, and my Google-Fu, can handle. I'm just unsure of the best configuration as far as storage goes. I'm also trying to make do with what I've got, but if there's a part or two that would make an immense difference, I'm open to considering it.

Compute
Gigabyte Z68 with Intel i5-2500k
MSI P67A with Intel i5-2500k
MSI 890FA with AMD Phenom 965

x2 Corsair 16GB RAM kits (4x4GB)
x1 Corsair 8GB RAM kit (4x2GB)


Networking
x4 Intel i350 T4 4-port NICs
Netgear GS108 8-port GbE switch


Storage
Adaptec 3805 w/128MB cache, x2 SFF-8087 to 4-port SATA breakouts (SATA II interface)

HDD
x8 1TB
x4 500GB
x5 150GB VelociRaptor

SSD
750GB Crucial MX300
60GB OCZ Vertex 3


(Yes I realize this is a bunch of old shit but it's still capable of doing small workloads)
 
You really need a shared storage backend if you want a true Hyper-V cluster with failover, etc. I'm running FreeNAS on a set of RAID 10 Intel S3700s set up with iSCSI on a 10G LAN. It's running 10+ VMs including Exchange, pfSense, Nextcloud, etc., with zero performance issues. This is all on rack-mounted server hardware, though.
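
If it helps, once both nodes can see the same iSCSI disk, the cluster side is roughly this in PowerShell. The node names, cluster name/IP, and disk name below are placeholders, so treat it as a sketch rather than a recipe:

Code:
# On each node: add the failover clustering feature
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

# From one node: validate, build the cluster, then turn the shared disk into a CSV
Test-Cluster -Node HV1, HV2
New-Cluster -Name HVLAB-CL -Node HV1, HV2 -StaticAddress 192.168.1.50
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"   # "Cluster Disk 1" is the default name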

Another potential issue is the different CPU architectures. There are extra settings required for a VM to be migrated between a host running an Intel CPU and one running an AMD CPU. I believe it's just a checkbox, but you'll have to find it and set it if you want this setup to work.
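
For what it's worth, that checkbox sounds like the processor compatibility setting on the VM, which can also be flipped from PowerShell while the VM is off (the VM name here is a placeholder). Worth double-checking the Intel/AMD case specifically, since that setting is mainly aimed at moving between different CPU generations from the same vendor:

Code:
# Processor compatibility mode - the VM has to be powered off first
Set-VMProcessor -VMName "LabVM01" -CompatibilityForMigrationEnabled $true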
 
In my previous setup, the two Intel boxes were the compute nodes and the AMD box was the storage backend. I used the Adaptec RAID card for a RAID 10 (or maybe just RAID 1, can't remember) of the eight 1TB mechanical drives, and then used Server 2012 to handle NIC teaming and iSCSI with MPIO. The setup seemed to run OK, but storage throughput was definitely an issue. I even reconfigured it to use just the Crucial SSD through the motherboard's SATA controller, but it didn't really change anything. I've priced out a few 10Gb coax NICs; I'd just prefer not to spend a couple hundred on that if I don't absolutely have to.
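
For anyone rebuilding something similar, the Server 2012 initiator side of that looked roughly like this in PowerShell. The NIC names and portal IPs are placeholders, and the iSCSI ports get left out of the team and handed to MPIO instead:

Code:
# Team two of the i350 ports for general LAN traffic (placeholder NIC names)
New-NetLbfoTeam -Name "LanTeam" -TeamMembers "Ethernet 1","Ethernet 2" -TeamingMode SwitchIndependent

# MPIO + iSCSI on the remaining ports (the MPIO feature needs a reboot after install)
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress 10.0.10.10
New-IscsiTargetPortal -TargetPortalAddress 10.0.20.10
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true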
 

If you want an actual cluster that doesn't have performance issues, that's pretty much your only option. You can get the Intel X520 single-port for around $50 or the dual-port for under $100. You can set the storage server to use the 10G LAN for iSCSI and 1G for all other file sharing, then directly connect the VM host to the storage server via an SFP+ cable, so you won't need a 10G switch. You will be limited to two VM hosts, though, unless you pick up one of those 4-port 10G cards from Hot Deals for your storage node.
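
If you go the direct-connect route, you can also pin the iSCSI session to the point-to-point 10G addresses so that traffic never wanders onto the 1G side. Something like this, where the IPs and the IQN are just example placeholders:

Code:
# Keep iSCSI on the back-to-back 10G link (example addresses/IQN only)
New-IscsiTargetPortal -TargetPortalAddress 10.10.10.1 -InitiatorPortalAddress 10.10.10.2
Connect-IscsiTarget -NodeAddress "iqn.2005-10.org.freenas.ctl:hvlab" `
    -TargetPortalAddress 10.10.10.1 -InitiatorPortalAddress 10.10.10.2 `
    -IsMultipathEnabled $true -IsPersistent $true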
 
Where are you seeing those prices on those NICs? I'm seeing them $20-40 more than that at their cheapest.
 
They're on eBay for the Intel-branded model (ordered from China/Israel).
https://www.ebay.com/itm/Intel-Ethernet-Converged-Network-Adapter-X520-LR1-E10G41BFLR/173086243602
https://www.ebay.com/itm/INTEL-8259...hernet-Converged-Network-Adapter/183396451138

I'd recommend the Mellanox ConnectX cards, though; you can get a pair of single-port cards for about $40 from Amazon. I haven't tested them with Hyper-V, but they work great with ESXi: http://a.co/d/90g4K9h

You would definitely need to use the AMD box for the storage, as you likely wouldn't have much headroom on the Sandy Bridge chips; they only have 16 PCIe lanes total.
So two x8 cards could saturate the bus, if your board even supports an x8/x8 config, and that leaves no lanes for the storage card (the Z68 would, the P67 maybe/maybe not).
You'll likely need a dual-port card in either system unless you want to buy a pricey switch.
Otherwise you'd have to upgrade whichever system you use for storage to get additional PCIe lanes.
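
Quick back-of-envelope on the lanes, assuming PCIe 2.0 on both Intel boards:

The CPU gives you 16 lanes, so two x8 cards would use all of them, and that's only if the board can split them x8/x8 (Z68 yes, the P67 is board-dependent).
Anything in the chipset slots shares the DMI link back to the CPU, which is roughly 2 GB/s total.
A single 10GbE port needs about 1.25 GB/s and PCIe 2.0 is about 500 MB/s per lane, so an x4 chipset slot can technically feed one port, but it's splitting that 2 GB/s with the onboard SATA and everything else.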
 

I ordered my 10G NICs from Natex.us, but it appears they're out of the dual-port Intel.




Do not get any networking equipment from China; you have zero idea whether it has been modified. The site I posted above (Natex.us) sells legit Intel cards, so there's no reason to risk it with the Chinese versions.
 
I work IT security by day (nerdy gamer by night), so I have no intention of buying anything directly from China. That said, most of the shit comes out of China anyway, lol.
 