2012 Lab Project - Should I scrap the ESXi/ZFS All-in-One? What's next?

Last year I did the All-In-One, but I think I'm ready to move on and do something more traditional. My favorite thing about the AIO is that it's simple and fits in one energy-efficient, quiet, cool chassis. My least favorite thing is that it gets no respect from my colleagues, since it's very unusual and would never fly in a production environment. Let's be honest, it does present a few quirks and annoyances. But most of all: I get bored too easily and I want to build something new :(

Also, I bought a 25U rack, so now I have an excuse to rebuild the setup with rack-mount equipment.

At first I think I'll do:
  • A managed GigE switch, and get as much iSCSI going as I can
  • OR I might get a Fibre Channel SAN going with some salvaged parts from work. I have to see what's available
  • One ESX server (1U or 2U, LGA 1155, 16GB RAM)
  • One NAS/SAN server (3U or 4U with 8-12x 2TB drives under ZFS)
  • One NAS/SAN backup server (3U or 4U with 8-12x 2TB drives under ZFS)
  • Later on I'd like to add another ESX server for vMotion
 
Why not just NFS datastores? Unless an application requires block access to storage, I'd keep it simple.

I've been working on many large-scale VMware environments, and by large, I mean hundreds of ESX servers with thousands of VMs. Pretty much everyone is going NFS if they can.
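If you do end up on NFS, mounting the export as a datastore is quick from the ESXi shell. A minimal sketch (the NAS address 10.0.0.10 and the /tank/vmstore export are just placeholders):

    # mount the NFS export as a datastore called "nfs-vmstore" (hypothetical names)
    esxcli storage nfs add --host 10.0.0.10 --share /tank/vmstore --volume-name nfs-vmstore
    # verify it shows up and is mounted
    esxcli storage nfs list

The same thing can of course be done per host from the vSphere Client, but the CLI version is handy once you have more than one host to configure.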
 
Why not just NFS datastores? Unless an application requires block access to storage, I'd keep it simple.

I've been working on many large-scale VMware environments, and by large, I mean hundreds of ESX servers with thousands of VMs. Pretty much everyone is going NFS if they can.

+1
Especially if you use Veeam for backup, because the SAN alternatives are stupid expensive.
 
But doesn't NAS have more overhead on the wire than SAN? I just wanna be cool and say "I have a SAN in my house!" But I might do NFS, sure, why not. As long as I can aggregate GigE ports.
 
NFS, iSCSI, and FC all perform very closely.

My opinion is that if you truly want to learn VMware, you should have two hosts and some sort of shared storage. Simply running a single host won't teach you anything.

As for what SAN to use, I've been running an OpenIndiana whitebox at home which is serving up iSCSI and NFS and I've been nothing but impressed by its performance.

My lab consists of two hosts each with a 6 core AMD CPU and 32GB of RAM and the ZFS whitebox has a dual core and 32GB of RAM.

Outside of DirectI/O (which I'd steer customers clear of using anyway) I can do anything in my lab that someone would do in a production environment.
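For anyone curious what the ZFS side of that looks like: exporting a filesystem over NFS from an OpenIndiana/illumos box is only a couple of commands. A rough sketch, assuming a hypothetical pool named tank and a 10.0.0.0/24 storage subnet:

    # create a dedicated filesystem for VM storage and share it over NFS
    zfs create tank/vmstore
    zfs set sharenfs=on tank/vmstore
    # optionally limit access to the ESXi vmkernel subnet and allow root access,
    # which ESXi needs for NFS datastores (no root squash)
    zfs set sharenfs="rw=@10.0.0.0/24,root=@10.0.0.0/24" tank/vmstore

iSCSI on the same box goes through COMSTAR (itadm/stmfadm) and takes a few more steps, so it's left out here.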
 
But doesn't NAS have more overhead on the wire than SAN? I just wanna be cool and say "I have a SAN in my house!" But I might do NFS, sure, why not. As long as I can aggregate GigE ports.

Yes, it's cool :D and in some cases you need a SAN, i.e. RDMs and clustering. If you can get FC equipment for free, it's good to try. Otherwise, iSCSI works very similarly, just without zoning. You can't really aggregate GigE, but you can manually balance by connecting to datastores through different NICs.

However, in the end, you will value the simplicity and flexibility of NFS. Use NFS as the primary protocol and use block only where required. There are always more problems with SAN, especially during upgrades.

You could also get a dedicated storage server plus a beefier ESX box and run multiple virtualized ESXi hosts inside it.
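On the "manually balance" point: with the ESXi 5 software iSCSI initiator you can bind multiple vmkernel ports to the adapter so traffic is spread across NICs per path. A hedged sketch, assuming vmk1/vmk2 already exist on separate uplinks, the software adapter enumerates as vmhba33, and the target lives at 10.0.0.10 (all placeholder names):

    # enable the software iSCSI initiator
    esxcli iscsi software set --enabled=true
    # bind two vmkernel ports to the software adapter for multipathing
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
    # point the adapter at the target's portal and rescan
    esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 10.0.0.10:3260
    esxcli storage core adapter rescan --adapter vmhba33

With a round-robin path selection policy on the LUNs, that gets you reasonably even use of both GigE links without any switch-side aggregation.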
 
NFS, iSCSI, and FC all perform very closely.

My opinion is that if you truly want to learn VMware, you should have two hosts and some sort of shared storage. Simply running a single host won't teach you anything.

As for what SAN to use, I've been running an OpenIndiana whitebox at home which is serving up iSCSI and NFS and I've been nothing but impressed by its performance.

My lab consists of two hosts each with a 6 core AMD CPU and 32GB of RAM and the ZFS whitebox has a dual core and 32GB of RAM.

Outside of DirectI/O (which I'd steer customers clear of using anyway) I can do anything in my lab that someone would do in a production environment.

Thanks, I plan on building two ESX hosts based on LGA 1155 Xeons. Not as beefy as yours, but it will do the job for me. I still have to figure out where to obtain ESX software licensing. ESXi wouldn't give me all the cool features I want to try out.
 
Yes, it's cool :D and in some cases you need a SAN, i.e. RDMs and clustering. If you can get FC equipment for free, it's good to try. Otherwise, iSCSI works very similarly, just without zoning. You can't really aggregate GigE, but you can manually balance by connecting to datastores through different NICs.

However, in the end, you will value the simplicity and flexibility of NFS. Use NFS as the primary protocol and use block only where required. There are always more problems with SAN, especially during upgrades.

You could also get a dedicated storage server plus a beefier ESX box and run multiple virtualized ESXi hosts inside it.

I agree with you. I will definitely look into NFS storage. I doubt I'd be happy with 1Gbps throughput for a datastore, though. I have to start googling NFS with LACP/aggregation/teaming/bonding, whatever you call it.
 
Thanks, I plan on building two ESX hosts based on LGA 1155 Xeons. Not as beefy as yours, but it will do the job for me. I still have to figure out where to obtain ESX software licensing. ESXi wouldn't give me all the cool features I want to try out.

ESX is gone. ESXi is all that remains now. It has all the same features, just no Service Console.

As for licensing, you have two options:

1. Acquire a trial key every 3 months as they expire
2. Wait and hope for VMware to offer a "Technet" like subscription service
 
ESX is gone. ESXi is all that remains now. It has all the same features, just no Service Console.

As for licensing, you have two options:

1. Acquire a trial key every 3 months as they expire
2. Wait and hope for VMware to offer a "Technet" like subscription service

Can it do vMotion and advanced stuff?
 
Can it do vMotion and advanced stuff?

ESXi alone doesn't do much; you need vCenter installed on a Windows machine (physical or VM), or you can download the vCenter appliance to run as a VM.

Once you get a trial key and license vCenter and the hosts, you can do vMotion, Storage vMotion, DRS, HA, everything.
 
ESXi alone doesn't do much; you need vCenter installed on a Windows machine (physical or VM), or you can download the vCenter appliance to run as a VM.

Once you get a trial key and license vCenter and the hosts, you can do vMotion, Storage vMotion, DRS, HA, everything.

Sounds good, thanks.

Anyone else have any input on this crazy idea of mine? To have an ESX cluster at home?
 
Yes, you can do LACP for NFS. I use a QNAP and it works very well. Basically one GigE link ends up doing send and the other receive, though. I actually get much higher IO than with iSCSI.

I have a cluster at home. I usually shut it down though, because it costs too much in electricity.
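On the LACP point: for the ZFS-box end of the aggregation, here's a sketch only, assuming an OpenIndiana/illumos NAS with two hypothetical e1000g0/e1000g1 ports and a switch that supports LACP:

    # member links have to be unconfigured before joining an aggregation
    ipadm delete-if e1000g0
    ipadm delete-if e1000g1
    # build an LACP (802.3ad) aggregation from the two GigE links
    dladm create-aggr -L active -l e1000g0 -l e1000g1 aggr0
    # bring it up with a static address (placeholder IP)
    ipadm create-if aggr0
    ipadm create-addr -T static -a 10.0.0.10/24 aggr0/v4

Keep in mind a single NFS session still hashes onto one member link, so as noted above you mostly see the benefit once multiple hosts (or multiple vmkernel ports on different subnets) hit the box at the same time.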
 
I have two ESX hosts here at home. See sig for details. Both talk to the NAS in my sig, which holds the VMDKs/VMXes. In total I have 9 VMs that are on 24x7 and a handful more that are powered on/off as I want to play around with them. I don't think the idea is at all crazy if you intend to learn ESX well.
 
I'm voting "not crazy" as well. I have two Dell PE2950's with 16GB RAM and a couple iSCSI NAS units keeping my basement warm. :)
 
I would venture that most of the people who frequent this sub-forum have a lab at home, in the form of either an all-in-one or a multi-chassis lab. I find it hard to believe anyone who deals with VMware on a regular basis doesn't have a lab to test in.
 
Not crazy at all. You have to have a lab, I think. I've got two R210 IIs and one T410 for shared storage/VMware.
 
That's impressive for the $$. Is there a hardware raid card in there or what?
No PERC included, nor any drives, but all the caddies are included. I just picked up another PERC 5/i and dropped it in for mine: $50 for the PERC w/ BBU, then another $10 cable to connect it to the backplane. I only needed one drive in mine, but just for the sake of future expansion I went ahead and dropped the PERC in.

Edit: The seller will actually include 1 PERC and a backplane cable for $100 as well, if you'd rather just get it all from one source. I think you need two PERC 5/i or 6/i cards for all 12 drives, though.
 
What do the internal fans look like on these? Quiet enough for a small home office? At that price if they are mostly compatible with ESXi 5, I may bite.
Nope, not at all quiet enough for home office use. It's a rack server built to go into a rack in a DC, not in your home office. You might be able to replace them, but I wouldn't know what with. I'll have to walk down and double check, but IIRC these are similar to my 2950 in that they're the hot-swap variety. I am running ESX 5.0U1 on mine without issues, though I'm using a PCI Intel NIC, not the onboard ones.
 
Nope, not at all quiet enough for home office use. It's a rack server built to go into a rack in a DC, not in your home office. You might be able to replace them, but I wouldn't know what with. I'll have to walk down and double check, but IIRC these are similar to my 2950 in that they're the hot-swap variety. I am running ESX 5.0U1 on mine without issues, though I'm using a PCI Intel NIC, not the onboard ones.

I assumed that was going to be the case. The 1950s I had before were way too loud and were heat mongers. I can't go back down that road again. I love me some datacenter, but only when it's in a datacenter.
 
I assumed that was going to be the case. The 1950s I had before were way too loud and were heat mongers. I can't go back down that road again. I love me some datacenter, but only when it's in a datacenter.
It's not quite as loud as a 1950, and it's a hair quieter than my 2950, but still not something I'd put in most home offices - the one exception being that if you have a closet you can close it in, you will not have to deal with too much noise at all, and it doesn't put out TOO much heat IMO.
 
What size memory sticks does that come with?
Mine came with sixteen 2GB sticks, I think 533MHz, but I'd have to verify. It has 16 total slots for RAM, so if you wanted to upgrade it (I think 64GB max), you'd have to replace all the RAM in it, and you're probably looking at $800-$1k to do so.
 
Nope, not at all quiet enough for home office use. It's a rack server built to go into a rack in a DC, not in your home office. You might be able to replace them, but I wouldn't know what with. I'll have to walk down and double check, but IIRC these are similar to my 2950 in that they're the hot-swap variety. I am running ESX 5.0U1 on mine without issues, though I'm using a PCI Intel NIC, not the onboard ones.

47dB according to the C2100 specs. Their tower server, the T110, is 41dB.

It will be noticeable all the time, but not insane. It could be worse. I used to sleep next to a WBK38 and a Delta FFB1212EH. I SAID, I USED TO SLEEP NEXT TO A WBK38 AND A DELTA FFB1212EH! WHAT? NO, I DON'T LIKE GRAVY.
 
It looks like there are 6 SATA ports on board? You might just need one PERC 5/i or 6/i for 8 drives and then connect 4 drives to the onboard ports if you just need ports. It looks like there are 2 PCIe slots, enough for two controller cards.
 
I bought one of these and have still been playing with setting it up. There are 2 80x80x38 fans that do 100CFM at about 65dB, and another 60x60x38 fan (whose specs I forget). The thing is loud, 65+dB. It idles at about 29°C.

I went and replaced the 2 80mm fans with 3-pin fans that do about 40CFM at 34dB, and just unplugged the 60mm since it blows over the riser card. Now, at idle, it's at 34°C and relatively quiet.
 
If you already have an All-In-One... why not nest ESXi hosts? Use the NAS/SAN from your All-In-One to serve datastores to the nested ESXi hosts. Run a few light VMs on them just so you can play with vMotion, nothing "home production" on the nested stuff... I did a quick test of this using the VMware vCenter Server VM, added the 2 nested ESXi hosts to it, and was able to manage all 3 hosts nicely (including the real ESXi host)...

Your All-In-One is always on, and your nested ESXi hosts can always be shut down or left on as you see fit; your "home production" setup would never be affected.

No money spent on new hardware, you get to learn vMotion and the like at VMCI speeds, and no increase in heat or the electric bill.

Not as fun to "build", but maybe more practical.
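For reference, running ESXi itself as a VM needs hardware-assisted virtualization exposed to the guest. A rough sketch of the settings commonly used on ESXi 5.0 at the time (treat these as assumptions to verify, not gospel):

    # on the physical ESXi 5.0 host: allow virtualized hardware virtualization, then reboot
    echo 'vhv.allow = "TRUE"' >> /etc/vmware/config
    # in the nested ESXi VM's .vmx, the guest OS type should be the ESXi 5.x type:
    #   guestOS = "vmkernel5"
    # the vSwitch/portgroup carrying the nested hosts usually also needs
    # promiscuous mode set to Accept so the nested VMs' traffic can pass

Later releases (5.1+) moved this to a per-VM vhv.enable setting, so which knob applies depends on the host version.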
 
I bought one of these and have still been playing with setting it up. There are 2 80x80x38 fans that do 100CFM at about 65dB, and another 60x60x38 fan (whose specs I forget). The thing is loud, 65+dB. It idles at about 29°C.

I went and replaced the 2 80mm fans with 3-pin fans that do about 40CFM at 34dB, and just unplugged the 60mm since it blows over the riser card. Now, at idle, it's at 34°C and relatively quiet.

How much power do these servers use?

Currently I am using 4 x SC1435s w/ 2 x 8376HE 2.3GHz quad-cores & 32GB RAM (no drives), booting from USB sticks and running off a Synology DS1511 and DS1512+.

I'd love to get slightly more modern CPUs, pick up a few GHz, and possibly use less power.

Anyone have any measurements on power utilization?

Also - do the onboard NICs work in ESXi 5?
 
We got one of the 1U ones where I work. It's pretty quiet for a 1U; I couldn't even tell it was on over our portable air conditioner. It came with 533MHz RAM as well.

Inside shots (attached images): dell1u_wcover.jpg, dell1u_wocover.jpg
 
Thinking about picking up some of these. Does anyone have a Kill-A-Watt they could use to check the power usage with no drives installed? It would be very much appreciated.
 
Does the second proc work in vSphere? I thought it was limited to one CPU, with no limit on cores, with the free version.
 
Does the second proc work in vSphere? I thought it was limited to one CPU, with no limit on cores, with the free version.
I've got dual and quad socket boxes with the free version of ESXi (5.0 and 4.1U2 respectively) with no issues.
 