iamspartacus
n00b
Joined: Mar 15, 2016
Messages: 5
I'm speccing out a home vSphere cluster for dual purposes. I want to learn more about VMware clustering, yes, but I also want to apply it as best I can to my home network needs, since I find that's the best way to learn things.
My goal is to have redundancy for all my "production" VMs (a Linux VM running many media Docker containers) and Windows Server VMs (AD, DNS, etc.). I want to be able to take a single node offline and still have my VMs running with no downtime, using vMotion to move them off first.
From what I gather I can achieve this in two ways.
1. Build a vSphere cluster alongside a dedicated storage server that holds my VMs and is shared amongst 2+ ESXi hosts. This lets me take a single ESXi host down and, using vMotion, move the VMs onto the second "fail-over" host first (roughly what the pyVmomi sketch further down automates). The upside is that I can spec out different hardware for the storage server and the ESXi hosts. The downside is that the storage server is a single point of failure: if it's down, my ESXi hosts become useless.
2. Build a vSAN cluster (I believe I need at least 3 hosts for this, or 2 data hosts plus a 3rd acting as a witness?) with identical storage on each. This gives me the most redundancy, since a single host going down won't take down my VMs. The problem is that this is a much more expensive route, because each host needs to be an all-in-one box with both enough CPU power (for all the media transcoding I need) and good storage options. (Rough sizing math is in the sketch right after this list.)
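To sanity-check the vSAN route, here's a back-of-the-envelope sizing calc (a rough Python sketch based on my own assumptions: FTT=1 with RAID-1 mirroring, a witness contributing no capacity, and ~30% held back as slack space):

[CODE]
# Rough vSAN usable-capacity estimate -- my assumptions, not gospel.
# FTT=1 with RAID-1 mirroring keeps two full copies of every object,
# so usable space is roughly half of raw, before slack/overhead.
def vsan_usable_tb(data_hosts, capacity_per_host_tb, ftt=1, slack=0.30):
    """Estimate usable TB for a mirrored vSAN cluster.

    data_hosts           -- hosts contributing capacity (a witness adds none)
    capacity_per_host_tb -- raw capacity-tier TB per host
    ftt                  -- failures to tolerate (1 => 2 copies of each object)
    slack                -- fraction held back for rebuilds/maintenance
    """
    raw = data_hosts * capacity_per_host_tb
    mirrored = raw / (ftt + 1)   # RAID-1 keeps ftt+1 copies of the data
    return mirrored * (1 - slack)

# Example: three hosts with 4 TB of capacity tier each
print(f"{vsan_usable_tb(3, 4.0):.1f} TB usable")  # ~4.2 TB
[/CODE]

By that math I'd need roughly double the raw disk for the same usable space, which is part of why the vSAN route looks pricier to me.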
Unless I'm not fully understanding how a vSAN cluster is configured, I'm leaning towards option one for the cost savings, and because while I'd hate to have to take my VMs offline to do maintenance on the storage server, it's not going to kill me like it would in an enterprise environment. I'm wondering what some of you VMware vets think of this scenario.
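For what it's worth, here's roughly how I picture scripting the host evacuation before maintenance under option one. This is an untested sketch using the open-source pyVmomi SDK; the vCenter address, credentials, and host names are just placeholders for my lab:

[CODE]
# Rough, untested sketch: vMotion every running VM off esxi-01 so it can be
# taken down for maintenance. Assumes shared storage, so only compute moves.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # home lab, self-signed certs
si = SmartConnect(host="vcenter.lab.local",       # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)
content = si.RetrieveContent()

def find_host(name):
    """Look up an ESXi host object by name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    try:
        return next(h for h in view.view if h.name == name)
    finally:
        view.Destroy()

source = find_host("esxi-01.lab.local")           # host going down
target = find_host("esxi-02.lab.local")           # fail-over host

# Kick off a vMotion for every powered-on VM on the source host.
for vm in source.vm:
    if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
        task = vm.MigrateVM_Task(
            host=target,
            priority=vim.VirtualMachine.MovePriority.defaultPriority)
        print(f"vMotion started for {vm.name}")

Disconnect(si)
[/CODE]

With the shared-storage design only the running state moves, so over 10Gb these migrations should be quick.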
P.S. I have a Dell X1052 switch with 4 x SFP+ ports, so I'd love to connect as many of these servers as possible (if not all) via 10Gb. I'm looking mainly at the new Supermicro Xeon D boards with on-board SFP+ ports. My plan is also to use the yearly licensing that comes with a VMUG subscription for this project.