Multi-host Virtualized Lab with shared storage (design stage)

XmagusX

Hey all.

Right now what I'm looking to build is a multi-system homelab with the following goals:

* Be able to take down any one node and remain functional
* Act as NAS for my workstations and media machines (~10TB will allow for growth)
* Rapidly deploy & destroy VMs for testing and experimentation
* Host persistent virtual infrastructure: local DNS, DHCP, SAMBA4/AD, OpenVPN, Gateway/Router, etc.
* The base OS on all systems should be Linux (CentOS ideally)
* Probably some others that are not occurring to me at the moment.

Given these goals, I have the following priorities:

1. Noise - this will be living in my house, and I have no desire to share quarters with something that sounds like a runway whenever the system fans kick up past idle. It doesn't matter how cheap a 1U pizza box might be; if it sounds like a Harrier, I'm going to take a pass.
2. Cost - with lower total cost of ownership being more important than initial investment. Three years is what I tend to use as a break point, i.e. if going with a processor that costs a hundred dollars more saves me three dollars a month (in power, cooling, etc.), I'll go with that processor instead -- $3 x 36 months = $108 in savings over the life of the system.
3. Performance - I want the most value for my dollar that I can get, but this isn't ever meant to be any kind of production environment, and as such it doesn't need to be screaming fast. Consumer-grade hardware is most likely sufficient for my purposes.

What I am currently considering is three to four machines, one or two of which would act as shared storage (depending on whether enclosure mirroring makes sense), with the other two acting as both oVirt nodes and Gluster bricks. The idea is that the shared storage machines would have some SSDs thrown into them, which is where the OS partitions of all the VMs would live. Otherwise the shared storage machines would be low-end systems: Intel G3258, 4-8GB RAM, onboard video, etc. The main question with regard to these seems to be what kind of shared storage to use. The two main contenders in my head at the moment are iSCSI over 1GbE or 4Gb Fibre Channel using SCST. My SAN experience thus far has been iSCSI, but this seemed like a way to get my feet a bit wet with FC.

The other two boxes would be oVirt nodes (hosted engine) with beefier Haswell processors (i5 or Xeon E3), at least 16GB of RAM, a cheap, small SSD for the OS, and a bunch of 7.2K SATA drives thrown in as well for GlusterFS.
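
To make the iSCSI option concrete, something like this is what I have in mind on the storage box. This is just a sketch using the in-kernel LIO target via targetcli to show the shape of it (SCST would be configured differently), and every name, device path, and IQN below is a placeholder:

# back the LUN with an LVM volume on the SSDs, then export it over iSCSI to the two oVirt nodes
targetcli /backstores/block create name=vmstore dev=/dev/vg_vm/lv_vmstore
targetcli /iscsi create iqn.2015-06.home.lab:storage1
targetcli /iscsi/iqn.2015-06.home.lab:storage1/tpg1/luns create /backstores/block/vmstore
targetcli /iscsi/iqn.2015-06.home.lab:storage1/tpg1/acls create iqn.2015-06.home.lab:node1
targetcli /iscsi/iqn.2015-06.home.lab:storage1/tpg1/acls create iqn.2015-06.home.lab:node2
targetcli saveconfig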

These are just my initial thoughts, and there's nothing in here that I'm really dead set on, so please feel free to offer any help, suggestions, or just flat out gut it and substitute something you think better meets my goals and priorities!

Thanks for any and all help!
 
Do you have a ballpark budget, at least? Or are you looking for the cheapest option that fills all your needs? Also, you didn't mention which type of virtualization you're wanting to use (ESXi, Hyper-V, etc.).
 
Under $2.5k ideally.

I want virtualization I can run on a Linux base OS, ideally (goal 5): oVirt, OpenStack, something along those lines.
 
What's your ~end~ goal? What all do you want to run? Think application, not "I need a VM".
 
This will be a homelab.

* Rapidly deploy & destroy VMs for testing and experimentation
* Host persistent virtual infrastructure: local DNS, DHCP, SAMBA4/AD, OpenVPN, Gateway/Router, etc.
 
For that, I'd honestly start with just a bunch of disks and the standard Linux NFS server, but I go for "simple" before anything else when it comes to homelabs for open-source software.
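
To be clear, I mean something as simple as this on the storage box -- the path, subnet, and export options below are just examples:

# /etc/exports on the storage box (example path and subnet)
/srv/vmstore  192.168.1.0/24(rw,sync,no_root_squash)

exportfs -ra              # re-read /etc/exports
showmount -e localhost    # sanity-check the export list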
 
That's more or less the current setup. What I'm looking to build is something more robust.
 
On the advice of some of the folks over in /r/homelab, I'm also considering a purely Gluster-based solution, where each node has both SSDs and HDDs, and oVirt uses Gluster as the shared storage. Possibly some InfiniBand for cheap 10+ gig between the nodes.
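
Roughly what I'm picturing, assuming three nodes that each contribute an SSD brick for VM images and an HDD brick for bulk storage (hostnames, brick paths, and the replica count are all placeholders):

# replica-3 volume on the SSD bricks for VM disks
gluster volume create vmstore replica 3 \
  node1:/bricks/ssd/vmstore node2:/bricks/ssd/vmstore node3:/bricks/ssd/vmstore
gluster volume set vmstore group virt    # optional: the virt tuning profile that ships with gluster
gluster volume start vmstore

# replica-3 volume on the 7.2K bricks for NAS/media duty
gluster volume create media replica 3 \
  node1:/bricks/hdd/media node2:/bricks/hdd/media node3:/bricks/hdd/media
gluster volume start media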

Thoughts?
 
Pick your hypervisor first, then go for virtual shared storage options.

FreeBSD with NFS would work for everything except Hyper-V.

 
For my purposes, I want the shared storage layer to work well with the virtualization layer, so I'm trying to find a pairing where the two complement one another well.

Unfortunately, FreeBSD NFS fails for:

* Be able to take down any one node and remain functional

BSD was something I looked into in the form of OpenFiler, though.
 
The only protocol all production hypervisors support equally is iSCSI.

Hyper-V can do SMB3 but not NFS.

ESXi can do NFS but not SMB3.

If running it as a VM is OK for you, give these guys a try:

https://www.starwindsoftware.com/starwind-virtual-san-free

But I'm not sure their free version supports running outside a hypervisor.

Worth asking.

 
...alternatively, you may look at a non-commercial implementation of HA ZFS, similar to the one here:

http://www.high-availability.com/zfs-ha-plugin/
