vSAN or not for home vSphere cluster

I'm speccing out a home vSphere cluster for dual purposes: I want to learn more about VMware clustering, but I also want to apply it as best I can to my home network needs, since I find that's the best way to learn things.

My goal is to have redundancy for all my "production" VMs (a Linux VM running many media Docker containers) and Windows Server VMs (AD, DNS, etc.). I want to be able to take a single node offline and still have my VMs running with no downtime via vMotion.

From what I gather I can achieve this in two ways.

1. Build a vSphere cluster alongside a dedicated storage server that holds my VMs and is shared amongst 2+ ESXi hosts. This would let me take a single ESXi host down and, using vMotion, bring the VMs up on the second "fail-over" host. The upside is that I can spec out different hardware for the storage server and the ESXi hosts. The downside is that the storage server becomes a single point of failure: if it's down, my ESXi hosts become useless.

2. Build a vSAN cluster (I believe I need at least 3 hosts for this? Or 2 data hosts plus a 3rd acting as a witness?) with identical storage on each. This gives me the most redundancy, since a single host going down shouldn't take down my VMs. The problem is that this is a much more expensive route, as I need more of an all-in-one hardware solution with both enough CPU power (for all the media transcoding I need) and good storage options. A rough sketch of what enabling vSAN looks like through the API is just below.
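For reference, this is roughly what enabling vSAN on an existing cluster looks like through the API, as far as I can tell. It's just a pyVmomi sketch I put together from the docs, not something I've run: the vCenter address, credentials, and the cluster name "Lab" are placeholders, and the 2-node witness setup would be a separate step not shown here.

```python
# Hedged sketch: enable vSAN on an existing cluster with pyVmomi.
# vCenter address, credentials, and the cluster name "Lab" are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # home lab: self-signed certs
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Find the cluster object by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Lab")

# Turn vSAN on. autoClaimStorage lets hosts claim eligible disks automatically
# (older-style config; newer vSAN versions prefer explicit disk groups).
spec = vim.cluster.ConfigSpecEx(
    vsanConfig=vim.vsan.cluster.ConfigInfo(
        enabled=True,
        defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
            autoClaimStorage=True)))
task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
print("vSAN enable task:", task.info.key)

Disconnect(si)
```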


Unless I'm not fully understanding how a vSAN cluster is configured, I'm leaning towards option one for the cost savings, and because while I'd hate to have to take my VMs offline to do maintenance on the storage server, it's not going to kill me like it would in an enterprise environment. I'm wondering what some of you VMware vets think of this scenario.

P.S. I have a Dell X1052 switch with 4 x SFP+ ports, so I'd love to connect as many of these servers as possible (if not all of them) via 10Gb. I'm looking mainly at the new SuperMicro Xeon D boards with on-board SFP+ ports. My plan is to use the yearly licensing provided with a VMUG subscription for this project.
 
I'd vote shared storage, preferably a Synology NAS serving up iSCSI or NFS.

vSAN is pretty slick, but there's a lot that can go wrong with it in a home lab. You also need 3 hosts vs. 2 with shared storage.
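If you go that way, mounting the same NFS export on every host is the whole trick that makes vMotion possible. Something along these lines with pyVmomi should do it, though treat it as a sketch: the vCenter credentials, the NAS address 10.0.0.10, the export path, and the datastore name are all placeholders for whatever your setup uses.

```python
# Hedged sketch: mount the same NFS export as a datastore on every ESXi host
# so vMotion can move VMs between them. All names/addresses are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

nfs_spec = vim.host.NasVolume.Specification(
    remoteHost="10.0.0.10",          # the NAS (Synology or whatever)
    remotePath="/volume1/vmstore",   # exported path on the NAS
    localPath="nas-vmstore",         # datastore name as ESXi will see it
    accessMode="readWrite")

for host in hosts.view:
    host.configManager.datastoreSystem.CreateNasDatastore(nfs_spec)
    print("mounted nas-vmstore on", host.name)

Disconnect(si)
```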
 
If I decide to go the shared storage route (I'm leaning that way currently), are there any particular requirements (specifically CPU) for vMotion to work seamlessly between hosts? Would a host running a Xeon D work fine with one running a Xeon E3 v5?
 
EVC needs to be enabled.

I tried this with my Xeon E5-2630 v3 box and a Xeon D-1540 box. vMotion will stop you until you enable EVC.

https://kb.vmware.com/selfservice/m...nguage=en_US&cmd=displayKC&externalId=1003212
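You can also set it from a script if you're into that. Rough pyVmomi sketch below, with the caveat that the vCenter details and cluster name are placeholders, and you should check which evcModeKey values your hosts actually report as supported (for an E5 v3 + Xeon D mix I'd expect a Haswell baseline, but verify).

```python
# Hedged sketch: set the cluster EVC baseline with pyVmomi so hosts with
# different CPU generations can vMotion. Names and the mode key are placeholders;
# check the supported mode list printed below before picking one.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

clusters = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in clusters.view if c.name == "Lab")

evc_mgr = cluster.EvcManager()
for mode in evc_mgr.evcState.supportedEVCMode:
    print("supported EVC mode:", mode.key)

# Pick the lowest common baseline, e.g. Haswell for an E5 v3 + Xeon D mix.
task = evc_mgr.ConfigureEvcMode_Task(evcModeKey="intel-haswell")
print("EVC configure task:", task.info.key)

Disconnect(si)
```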

I am not really at the point in my home lab yet where I want to mess with clustering. Mainly I'm trying to keep one system separated from the lab so the whole house doesn't go down when I go playing with things. I've done that twice now, doing various tasks while setting up a vDS or moving my DNS server around. I am able to migrate if the VMs are powered off, however; I just can't do it while they're running. Also, if you share storage, you can just remove the VM from one host's inventory, then browse the datastore on the other host and add it back that way.
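That remove/re-add trick can be scripted too if you ever need it. Again just a sketch: the datacenter name, target host, datastore name, and .vmx path below are all placeholders.

```python
# Hedged sketch: register a VM's .vmx from a shared datastore onto another host.
# Datacenter/host names, datastore name, and the .vmx path are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

dc = next(d for d in content.rootFolder.childEntity
          if isinstance(d, vim.Datacenter) and d.name == "Home")

hosts = content.viewManager.CreateContainerView(dc, [vim.HostSystem], True)
host = next(h for h in hosts.view if h.name == "esxi2.lab.local")

# Register the existing .vmx from the shared datastore on the surviving host.
task = dc.vmFolder.RegisterVM_Task(
    path="[nas-vmstore] dns01/dns01.vmx",
    asTemplate=False,
    pool=host.parent.resourcePool,   # the host's/cluster's root resource pool
    host=host)
print("register task:", task.info.key)

Disconnect(si)
```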
 
I built out my home cluster (3 servers) with vSAN in mind. It's given me a shot to play with it but it is not my primary means of storage. I use a couple of Synology boxes for iSCSI.
 
I'm still debating this decision. Luckily the new wave of Xeon D boards isn't in stock yet, which has given me some time. It's between a 2-node ROBO vSAN with a witness appliance or a single storage appliance shared amongst 2 ESXi hosts.

I got an amazing deal on 6 x 400GB Hitachi USSL enterprise SSDs (38 PB write endurance), so I have those at my disposal along with 4 x 480GB Intel 730s. If I were to go the vSAN route, I'd need to swap the 730s for some enterprise drives to use as my vSAN cache, or I'd need to swap a few of the USSLs for prosumer SSDs to use as capacity. If I go the shared storage route, I can use all 6 Hitachis to create one good-sized ZFS pool.
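For the ZFS option, one layout I've seen suggested is three mirrored pairs (roughly 1.2TB usable, fast rebuilds); a single raidz2 over all six would give closer to 1.6TB if capacity matters more than IOPS. Quick sketch of the mirrored layout below; the device IDs and pool name are placeholders, and it only prints the zpool command unless you uncomment the run line.

```python
# Hedged sketch: build a striped-mirror ("RAID10-style") pool from the six
# 400GB SSDs. Device IDs and pool name are placeholders; dry-run by default.
import subprocess

DISKS = [f"/dev/disk/by-id/ssd-{n}" for n in range(1, 7)]   # placeholder IDs

cmd = ["zpool", "create", "-o", "ashift=12", "tank"]
for a, b in zip(DISKS[0::2], DISKS[1::2]):                  # pair into 3 mirrors
    cmd += ["mirror", a, b]

print(" ".join(cmd))                 # review the command before running it
# subprocess.run(cmd, check=True)    # uncomment to actually create the pool
```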

Every node in my network (outside of wireless) will be connected via dual SFP+ ports (so 20Gbps), including my workstation PC, so I'd love to configure my storage for the best possible sequential read/write performance, as I often do a lot of large copies (50-250GB of video files).
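Once it's up I'll probably sanity-check the path from the workstation with something crude like this before reaching for proper tools like fio or iperf3; the mount path and test size below are just placeholders.

```python
# Hedged sketch: crude sequential write/read throughput check against a mounted
# share. TARGET and SIZE_GB are placeholders; real benchmarks (fio) are better.
import os, time

TARGET = "/mnt/nas-vmstore/throughput.tmp"   # placeholder mount point
SIZE_GB = 20
BLOCK = 1024 * 1024                          # 1 MiB writes
blob = os.urandom(BLOCK)

start = time.time()
with open(TARGET, "wb") as f:
    for _ in range(SIZE_GB * 1024):
        f.write(blob)
    f.flush()
    os.fsync(f.fileno())                     # make sure it actually hit the storage
write_mb_s = SIZE_GB * 1024 / (time.time() - start)

start = time.time()
with open(TARGET, "rb") as f:                # note: client page cache can inflate this
    while f.read(BLOCK):
        pass
read_mb_s = SIZE_GB * 1024 / (time.time() - start)

os.remove(TARGET)
print(f"write: {write_mb_s:.0f} MB/s  read: {read_mb_s:.0f} MB/s")
```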
 