Home lab and ultimate cost savings

Power bills are my main concern because of how often this may sit unused while I'm away.

Considering a 10-VM host running Server 2012 Hyper-V Core, with a later project to upgrade these VMs to Server 2012:
- 2x Win Server 2008 R2 PDC (for merging both forests later on) ---- 2GB + 2GB
- 2x Win Server 2008 R2 Backup DC ---- 2GB + 2GB
- Win Exchange 2010 ---- 6GB
- Win Server 2008 R2 (File Server) ---- 4GB
- Testing Linux options like NGINX web pages and Atlassian stuff ---- 8GB
- 2x user-facing VMs, Win7 x86 + x64, for testing GPO applications ---- 2GB + 3GB
TOTAL MEMORY REQ: approx. 32GB
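
For a quick sanity check of that memory budget against candidate host sizes, here's a rough sketch. The 2GB-per-host reserve for the Hyper-V parent partition is my own assumption, and the sketch ignores Dynamic Memory, which could let a tight plan still work:

```python
# Sum the planned VM allocations and compare against candidate host RAM.
# The 2 GB parent-partition reserve per host is an assumed figure.

vm_memory_gb = {
    "PDC 1 (2008 R2)": 2,
    "PDC 2 (2008 R2)": 2,
    "Backup DC 1 (2008 R2)": 2,
    "Backup DC 2 (2008 R2)": 2,
    "Exchange 2010": 6,
    "File server (2008 R2)": 4,
    "Linux testing (NGINX/Atlassian)": 8,
    "Win7 x86 client": 2,
    "Win7 x64 client": 3,
}

total = sum(vm_memory_gb.values())
print(f"Total VM memory: {total} GB")  # 31 GB

parent_reserve_gb = 2  # assumed overhead for the Hyper-V parent partition
for hosts, ram_each in [(1, 16), (2, 16), (1, 32)]:
    usable = hosts * (ram_each - parent_reserve_gb)
    verdict = "fits" if usable >= total else "does NOT fit"
    print(f"{hosts} x {ram_each} GB: {usable} GB usable for VMs -> {verdict}")
```

Under that assumption, even two 16GB boxes come up a few GB short of the full static plan, which is where Dynamic Memory or trimming an allocation or two would come in.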

My biggest concern is buying something and realizing by the 4th or 5th VM that it won't cut it.

I like the Intel NUC or GIGABYTE BRIX, but they're limited to 16GB, unless I get two of them and hope they've got enough processing power. I'm mainly interested in learning the workings of the software. Perhaps an iSCSI session to a WHS 2011 host could work, but I won't know until I try to get two low-power devices clustered, if that's how this plays out.

Any suggestions?
 
You can build good low-power boxes. My lab hosts each idle at around 35W, so with all 3 going and me not using them I'm still just a bit over 100W. That's not bad at all. Any modern small setup should be the same or even better.

EDIT: Note, the only storage in my servers is SSDs for caching. Everything is held on a Synology NAS.
 
Yeah, power is actually not that bad on a lot of the newer hardware. Get some Haswell-based stuff and it's very power efficient.

I run Ivy Bridge i3 Hosts (x2) with a bunch of disks and the whole setup runs around 150w.

Hard drives are actually one of the biggest offenders in setups like this. If you have a lot of them, they really start to add up on power consumption.
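
To put rough numbers on the power discussion, here's a quick annual-cost sketch. The wattages come from the posts above; the $0.12/kWh rate and the ~7W-per-idle-HDD figure are assumptions for illustration:

```python
# Rough annual running-cost estimate for an always-on lab.
# Wattages are from the thread; the electricity rate and per-drive
# idle draw are assumed values.

def annual(watts, rate_per_kwh=0.12):
    kwh = watts * 24 * 365 / 1000
    return kwh, kwh * rate_per_kwh

for label, watts in [
    ("3 hosts idling at ~35 W each", 3 * 35),
    ("2 i3 hosts + disks (~150 W total)", 150),
    ("each additional spinning HDD (~7 W idle)", 7),
]:
    kwh, cost = annual(watts)
    print(f"{label}: ~{kwh:.0f} kWh/yr, ~${cost:.0f}/yr")
```

At those rates each always-spinning drive only adds a few dollars a year, but a dozen of them can draw more than a couple of extra 35W hosts, which is why the spindles dominate.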
 
With the latest Haswell release, you can easily hit 30W idle per host. An i5 4570S has low power usage while still keeping the virtualization goodies. The trick is in the spindles, as Grentz already mentioned. You didn't mention storage, so I guess it's going to be NAS/SAN?
 
This gives me enough confidence to also look into two Haswell-based BRIX or NUC units for a start, with a small local flash drive to run just the hypervisor. Thanks a lot!

SantaSCSI: Currently running a Norco 4224 JBOD (12 HDDs installed), i3 CPU with WHS 2011 + StableBit DrivePool. Purchased at a time when I was less power conscious.

Considering powering it off in favor of a Synology DS414 with 4x 4TB WD Red drives, attached via link aggregation to a gigabit switch. My biggest concern is whether it can handle that many VMs.
 
Really depends on what you want to do, but 4x 4TB Reds will give you fewer IOPS and lower MB/s than your current Norco setup. The question is how tolerant you are of lower disk performance.

You also can't really compare the two, since the spindle setup is totally different. The Syno will draw less power, but at lower performance.

Spindown might save you some dollars on the disks, but all in all, disks are just plain power hogs compared to other components.
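
For a very rough sense of the spindle-count gap, here's a back-of-the-envelope comparison. The per-drive IOPS figures are common rules of thumb, not measurements, and they ignore RAID write penalties, caching, and sequential workloads:

```python
# Back-of-the-envelope aggregate random IOPS by spindle count.
# Per-drive numbers are rough rules of thumb, not benchmarks.

configs = {
    "Norco 4224, 12 mixed HDDs (~75 IOPS each)": (12, 75),
    "DS414, 4x WD Red 5400rpm-class (~60 IOPS each)": (4, 60),
    "2x SATA SSDs (~5000 IOPS each, conservative)": (2, 5000),
}

for name, (drives, iops_each) in configs.items():
    print(f"{name}: ~{drives * iops_each:,} aggregate random IOPS")
```

Even a small SSD pool changes the picture far more than adding or removing a few spindles does.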
 
Thanks for taking the time to respond!

Now that you mention it, the Syno won't do given that kind of performance and the VMs needed. To ensure higher IOPS, I may upgrade the current Norco from WHS 2011 to Server 2012, swap the 1-2TB drives for 4TB WD Greens or Reds, and add a second pool for VMs. Perhaps SSDs in the second pool for the highest performance, or a couple of WD Blacks, depending on what the VM capacity/cost requirements add up to.

EDIT: IOPS seem to be important; reading more, I'm out of my depth here. Further research is needed before I can say whether a NAS/SAN will be ideal vs. a locally attached single SSD, for example. The only thing I had in mind for the iSCSI functionality was clustering and failover experimentation, but I'm not sure it's worth the cost yet.

EDIT 2: A 1-bay Synology DiskStation DS114 NAS is looking nice; I wonder what performance with a single 512GB SSD would be. It seems to readily support VMware/Hyper-V/Citrix.

EDIT 3: The DS214+ supports 2 drives + link aggregation and can read/write iSCSI at ~100MB/s. With all VMs idle except for whatever is being installed/accessed at one time, I'm starting to wonder if this is the best way to go: block-level iSCSI + RAID 0, which may get even better performance. Again, it's a home lab; it's not a big deal if it all chokes in a year, since the temporary 6-month license would have expired before then.
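
As a rough gut check on that ~100MB/s figure, here's a sketch of how it spreads across mostly idle VMs. The per-VM numbers are purely illustrative assumptions:

```python
# How far ~100 MB/s of iSCSI throughput stretches across the lab.
# Per-VM figures below are illustrative assumptions, not measurements.

link_mb_s = 100       # roughly the DS214+ iSCSI / single-gigabit-link figure
idle_vm_mb_s = 0.5    # assumed background I/O per idle VM
active_vm_mb_s = 40   # assumed draw for one VM installing software/updates

idle_vms = 9
headroom = link_mb_s - idle_vms * idle_vm_mb_s - active_vm_mb_s
print(f"Headroom with {idle_vms} idle VMs and 1 busy one: ~{headroom:.0f} MB/s")

# Sequential MB/s is rarely the limit in this scenario; random IOPS on
# two spindles in RAID 0 is what the busy VM will actually feel.
```

So sequential throughput looks fine on paper; the random IOPS of two spindles (and the lack of redundancy in RAID 0) would be the real constraint.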
 
A two-bay Synology with SSDs will do very well. Just use NFS, not iSCSI. Synology has a performance problem with iSCSI right now.
 