Hey all.
Right now what I'm looking to build is a multi-system homelab with the following goals:
* Be able to take down any one node and remain functional
* Act as NAS for my workstations and media machines (~10TB will allow for growth)
* Rapidly deploy & destroy VMs for testing and experimentation
* Host persistent virtual infrastructure: local DNS, DHCP, SAMBA4/AD, OpenVPN, Gateway/Router, etc.
* The base OS on all systems is Linux (CentOS ideally)
* Probably some others that are not occurring to me at the moment.
Given these goals, I have the following priorities:
1. Noise - this will be living in my house, and I have no desire to share quarters with something that sounds like a runway whenever the system fans have to kick up past idle. It doesn't matter how cheap 1U pizza boxes might be; if it sounds like a Harrier, I'm going to take a pass.
2. Cost - with a lower total cost of ownership being more important than the initial investment. Three years is what I tend to use as a break point. I.e., if going with a processor that costs a hundred dollars more saves me three dollars a month (in power, cooling, etc.), I'll go with that processor instead -- $3 x 36 months = $108 in savings over the life of the system.
3. Performance - I want the most value for my dollar that I can get, but this isn't meant to ever be any kind of production environment, and as such doesn't need to be screaming fast. Consumer-grade hardware is most likely sufficient for my purposes.
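To make the break-even rule in priority 2 concrete, here's a tiny sketch of the calculation (the function name and the 36-month horizon are just my framing of the example above):

```python
# Break-even check for the 3-year TCO rule: does a higher up-front
# price pay for itself through monthly operating savings?
def tco_delta(extra_upfront, monthly_savings, months=36):
    """Return net savings over the horizon (positive = worth paying more)."""
    return monthly_savings * months - extra_upfront

# The example from above: $100 more up front, $3/month saved in power/cooling.
print(tco_delta(100, 3))  # 3 * 36 - 100 = 8, so the pricier CPU wins
```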
What I am currently considering is three to four machines: one or two acting as shared storage (depending on whether enclosure mirroring makes sense), and the other two acting as both oVirt nodes and Gluster bricks. The idea is that the shared-storage machines would have some SSDs thrown into them, which is where the OS partitions of all VMs would live. The shared-storage machines would otherwise be low-end systems: Intel G3258, 4-8GB RAM, onboard video, etc. The main question with regard to these seems to be what kind of shared storage to use. The two main contenders in my head at the moment are iSCSI over 1GbE or 4Gb Fibre Channel using SCST. My SAN experience thus far has been iSCSI, but this seemed like a way to get my feet a bit wet with FC. The other two boxes would be oVirt nodes (hosted engine) with beefier Haswell processors (i5 to Xeon E3), at least 16GB of RAM, a cheap, small SSD for the OS, and a bunch of 7.2k SATA drives thrown in as well for GlusterFS.
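For the Gluster side of that plan, the two oVirt nodes' SATA drives could be joined into a replicated volume along these lines (hostnames and brick paths here are hypothetical placeholders, not anything decided):

```shell
# Hypothetical hosts node1/node2; bricks live on the local 7.2k SATA pools.
# From node1: join the peer and create a two-way replicated volume,
# one brick per oVirt node, so either node can go down.
gluster peer probe node2
gluster volume create gv-data replica 2 \
    node1:/bricks/gv-data/brick node2:/bricks/gv-data/brick
gluster volume start gv-data

# Mount it (on the nodes themselves, or a client) via the FUSE client:
mount -t glusterfs node1:/gv-data /mnt/gv-data
```

One caveat on the design: plain replica-2 volumes are prone to split-brain when a node drops out, so a third arbiter brick (even on a low-end box) is commonly recommended if the one-node-down goal matters.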
These are just my initial thoughts, and there's nothing in here that I'm really dead set on, so please feel free to offer any help, suggestions, or just flat out gut it and substitute something you think better meets my goals and priorities!
Thanks for any and all help!