My first ESXi Whitebox - hardware purchased!

Martian

Update with final purchased hardware - I decided against the Supermicro + Xeon combo.

After months of research, I've finally pulled the trigger on hardware for my all-in-one box!

1 x Intel BOXDQ67SWB3 LGA 1155 Intel Q67 SATA 6Gb/s USB 3.0 Micro ATX Intel Motherboard - $125 + $7.56 shipping = $132.56
1 x Intel Core i7-2600 Sandy Bridge 3.4GHz (3.8GHz Turbo Boost) LGA 1155 95W Quad-Core Desktop Processor - $200 + tax / shipping from Microcenter - My brother lives near one (Thanks Netwerkz101 for the deal)
2 x G.SKILL Ripjaws X Series 16GB (2 x 8GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10666) Desktop Memory - 2 x $110 = $220
1 x Fractal Design Define Mini Black Micro ATX Silent PC Computer Case - $110 + $20 shipping = $130
2 x Nexus PWM Series D12SL-12PWM 120mm Case Fan - 2 x $12 + $2 shipping each = $28.00
1 x IBM BR10i - $42.00 (eBay)
2 x SFF-8087 to 4x SATA forward breakout cable - 2 x $9.23 + $2.12 shipping = $20.58

Total = ~$773.14 + tax / shipping on the processor

I already have:
1 x Antec TruePower 550 Watt Power Supply
2 x SAMSUNG Spinpoint F3 HD103SJ 1TB 7200 RPM 32MB Cache SATA 3.0Gb/s 3.5" Internal Hard Drive - ZFS mirror
1 or 2 120 GB Western Digital drive(s) - local datastore (for Illumian or OI + napp-it VM)
1 x 2GB USB 2.0 Flash Drive - connected to motherboard internal USB to boot ESXi


The Plan:
This all-in-one ESXi box will initially replace an old Core 2 Duo based desktop running Linux that presently handles my web server, e-mail server, file sharing, and music streaming. I also plan to host a few test and development VMs - probably no more than 2 or 3 at a time.

Later I hope to move my MythTV backend (HDHomeRun tuners) onto it with a 2nd mirrored ZFS pool. I'd also like to run a pfSense VM as my router / firewall / DNS / DHCP (using the 2nd NIC).

I know that the two 1 TB drives are pretty underwhelming compared to many setups, but honestly they more than meet my needs at present. Adding a third disk to the mirror for more IOPS, or another mirror for more space, should be trivial down the line.
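For the record, growing the pool later should be a couple of one-liners from the storage VM. A rough sketch, assuming a pool named "tank" and placeholder disk names:

    # attach a third disk to the existing mirror (3-way mirror = more read IOPS)
    zpool attach tank c2t0d0 c2t2d0
    # or add a second mirror vdev to grow capacity instead
    zpool add tank mirror c2t2d0 c2t3d0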

I'm really looking forward to the peace of mind of being able to do snapshots before updates / changes in case things go badly!

I'm also trying to decide if I want to mirror the local datastore. Re-creating the NAS OS wouldn't be the end of the world, but it wouldn't be ideal either. I know FakeRAID is not an option and I'm too cheap to shell out for an enterprise-class RAID controller. That pretty much leaves one option: create two separate datastores on two separate SATA disks and build a software RAID mirror, inside the storage OS, out of two virtual disks (one on each datastore). I'm not entirely sure how well that would work out in the end though...?
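If I go that route, the ZFS half of it looks simple enough - a sketch only, assuming an OpenIndiana storage VM whose root pool is the default rpool and whose second virtual disk shows up as c1t1d0:

    # mirror the root pool onto the second virtual disk
    zpool attach rpool c1t0d0s0 c1t1d0s0
    # make the second disk bootable as well
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
    # watch the resilver complete
    zpool status rpool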

The only other affordable mirroring option I've found would be to attach this (SYBA SY-PEX40045 SATA II (3.0Gb/s) 1:2 (2x1) Internal SATA II Port Multiplier) to the local HBA, but I'm not sure how well that would work out either.

Thanks to all who have contributed here. I've learned so much from the wealth of information!

Martian
 
So once you get the second pair of hard drives you're looking at spending almost $900?

Why not simply get a cheap AMD 6-core CPU, desktop mobo, and 32GB of RAM, plus a dual-port Intel NIC off eBay for $50? If you have a Microcenter nearby you could probably pull this off for $600, depending on how cheap you can find the RAM.

With the remaining money get a dual core AMD, 16GB RAM, mobo, another Intel dual port NIC, and build a dedicated ZFS SAN rather than crippling it by running it as a VM and permanently tying it to your ESXi host.

If you're going to spend the money, do it right. You won't find any business production environment running an "all in one" ESXi server so don't bother with it.

Later on save up a little more cash and build a second ESXi box and then you're running a real ESXi cluster and can fart around with all the bells and whistles.

EDIT:

Even better, get a pair of cheap AMD quad cores and build a pair of hosts with 16GB of RAM and a dual port Intel NIC each. It won't cost much more than the 32GB host. Then you can begin playing with HA, DRS, etc. right away. This is what I run at home and it's a massive help to have a "production" environment at home. The next purchase you should eye is a managed switch so you can VLAN and run everything on a dVS.
 
I seriously considered the six and eight core AMD Bulldozer processors but they are terribly power hungry compared to the Xeon - especially at idle. I'd piss away the $200 I would save up front just in my electric bill over the next 2-3 years. I also seriously considered an i5 2500 on an Intel motherboard but ultimately decided that Hyperthreading, IPMI, and lower power consumption with the Xeon build was worth the extra money. I don't need 32 GB of RAM right now and if / when I do I expect 8 GB ECC sticks will be more affordable.
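(Back-of-the-envelope numbers, assuming roughly a 60W idle difference and $0.12/kWh: 60W x 8,760 h/yr ≈ 526 kWh/yr ≈ $63/yr, or about $125-190 over 2-3 years - right in line with the up-front savings.)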

A separate SAN box was never an option. I'm fully aware that is the "right" way to do it but I have no need for it right now and am unwilling to dedicate the extra space, hardware, and power (electric bill) that it would require, not to mention the extra networking. I have all the hardware (including the drives) to do it, it's just not needed.

My goal of using ESXi is not high availability; the goal is the ability to snapshot my systems before I #%&$ with them, so if and when things go horribly wrong I have an "undo" button. I also want to be able to test things in a virtual environment and not have to set up another physical machine. I'm maintaining seven computers (server, MythTV backend, three MythTV frontends, a desktop, and a laptop) right now, plus whatever else I'm tinkering with at the time. I need less, not more.
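For reference, that "undo button" from the ESXi shell is just a few commands - a sketch, with the VM ID and snapshot names as placeholders:

    # list VM IDs
    vim-cmd vmsvc/getallvms
    # snapshot VM 5 before messing with it (name, description, includeMemory, quiesced)
    vim-cmd vmsvc/snapshot.create 5 pre-update "before the upgrade" 0 0
    # find the snapshot ID, then revert if things go horribly wrong
    vim-cmd vmsvc/snapshot.get 5
    vim-cmd vmsvc/snapshot.revert 5 <snapshotId> 0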

Martian
 
You can do all that with VMware Workstation then.

I just don't understand the point of taking an enterprise product, ESXi, and totally crippling it to run an "all in one" system that really offers no benefit vs. running Hyper-V or VMware Workstation.

*shrug*

You will get the power savings and consolidation you want but if you want to really learn ESXi I'd recommend a separate SAN.
 
Dear Martian,

If you're looking at Silicon Image SATA controllers for this project, the common options are:
1. Silicon Image 3112/3114/3512. I believe these are PCI-based. Usually very affordable.
2. Sil 3124 (usually found in PCI implementations) or Sil 3132 (PCI-Express). Slightly more expensive than the 3112/3114/3512-based cards.

I recommend the Sil 3132-based PCI-Express option because it offers higher performance, more in line with modern expectations.

----------------
As for the other items, one question in particular:

How long do you plan to hold on to this hardware setup?
1. If you want to build once and use it for 7 years or more - some users go this route - then you have the option of a proper server build.
2. If you only want to use it for a short-to-medium time, say 2 or 3 years, rational compromise is the key.
3. If you just want to play with it for learning purposes, then reasonable cost will do. The most critical thing is RAM capacity: 16GB of RAM will cover a lot of performance demand, temporarily, because RAM can mask I/O demand in some cases, but not all. Where an I/O bottleneck is an issue, you can luckily get very affordable SSDs in the States during promotional sales.
 
OK, you guys have made me re-think my strategy. I jumped at the Supermicro board and the Xeon because of the promo code and because I knew it could do everything I wanted and more (even though it is more than I need). I still think it is a good choice, but I had forgotten that it doesn't have a PCI slot, which could come in handy (and would be needed if I go with a Sil3112). My main reasons for wanting the Supermicro + Xeon combo were IPMI and HT in the Xeon vs. an i5.

I recently started using IPMI (Dell's version) at work and it is wonderful. However, I didn't realize that later versions of Intel vPro with AMT (version 6+) appear to support basically the same features as IPMI. It also occurred to me that the i7-2600 (non-K) has HT and supports VT-d.

For the same money I can get:

Intel BOXDQ67SWB3 LGA 1155 Intel Q67 SATA 6Gb/s USB 3.0 Micro ATX Intel Motherboard - $125 + $7.56 shipping = $132.56 (saves me $47.44)
Intel Core i7-2600 Sandy Bridge 3.4GHz (3.8GHz Turbo Boost) LGA 1155 95W Quad-Core Desktop Processor with Intel HD Graphics 2000 - $300 (an extra $60 over the Xeon, but faster and with Turbo Boost)
G.SKILL Ripjaws X Series 16GB (2 x 8GB) 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10666) Desktop Memory - $110 (saves me $30 and allows for an easy upgrade to 32GB!)

This setup gains me a little processor speed and the ability to easily move up to 32GB of RAM. I also get a PCI slot, and it appears I can still remote manage (not on a separate port, but I don't really care).

I lose ECC (but I don't need it) and pick up a little more power draw, though the extra speed and Turbo Boost perhaps justify it. I doubt the idle power draw differs much between an i7 and the Xeon anyway, and I believe the extra 15 watts is for the integrated graphics, which won't be needed on this motherboard, so it may be a wash.

EDIT: I also lose a NIC port, but I don't need it at first and I'm sure I can pick up a dual-port Intel NIC when I do.

Does anyone have experience with vPro? Will it let me remote manage via VNC like I think it will? Can anyone think of any good reason not to go with this i7 setup over the Supermicro + Xeon setup I originally posted?

Thanks again for your help!

Martian
 
@O.P.

You said you had pulled the trigger, so I didn't respond, but I was going to propose what you have in your last post.

Your new shopping list looks familiar (my current build).

Currently in use:
http://hardforum.com/showpost.php?p=1038069374&postcount=2


Proposed new build:
http://hardforum.com/showpost.php?p=1038451690&postcount=37

I have not bothered with the remote management feature because I sit in front of my lab setup all the time.

PS: there is a thread on [H] that points to an i7-2600 for $200 at Microcenter - B&M pickup only:
http://dealnews.com/Intel-Core-i7-Quad-2600-3.4-GHz-CPU-for-200-at-Micro-Center-stores/553317.html
 
I've been putting together a setup similar to yours. Not sure how important IPMI is to you, but what I did was get an HP ProLiant ML110 server. It has a Xeon E3-1220 and supports iLO 3. It cost $469 at Newegg but came with no hard drives and 2GB of RAM. Add in hot-swap bays @ $12 each and 16GB of ECC RAM and you come to a total of $665. I then got an IBM M1015 card for $85 here on [H], so the second controller was covered. Hit me up for links if you're interested.
 
Sorry wrong thread.. :) Didn't realize I jacked!
 
I was able to cancel my original order and decided to go with the Intel Q67-based motherboard, which supports vPro for remote management. Also, thanks to Netwerkz101, I'm going to save ~$80 after tax and shipping on an i7-2600 processor since my brother lives right by a Microcenter.

OP is updated with my new build.

I found a BR10i with the full-height bracket on eBay for $42 shipped and got the SFF-8087 cables from Monoprice. It's overkill for what I need, but I think it will be better / more reliable than a cheap SATA controller.

I'm still open to comments / suggestions - you guys are a ton of help! I'm particularly interested in hearing thoughts on mirroring two datastores on separate drives in software RAID 1 inside the storage VM; it seems possible, but it also seems like it is probably a bad idea. I'm also still considering the SYBA SY-PEX40045 SATA II (3.0Gb/s) 1:2 (2x1) Internal SATA II Port Multiplier as another option to mirror my local datastore.

Martian
 
There's likely no driver for that Syba card in ESXi so it won't even see the drives you plug into it.

You'll need to get a hardware RAID card that is supported in VMware's HCL or attempt to hack the hypervisor to include the driver (which I would avoid).
 
That is the beauty of a port multiplier - there is no OS driver required. It presents as a single disk to the system.

Martian
 
I assume you're planning on plugging that RAID card into the x16 slot, but if you are going to mod the x4 slot, please share your experience with pics! I'm kind of thinking of doing that. Why they ever close off that end I'll never know.

I saw something about MythTV; don't expect PCI passthrough to work for a tuner card. Based on people's experiences I'd be surprised if it did.
 
Yeah, I think the BR10i will need to go in the x16 slot. I would like to have the option of putting it in the x4 slot, but I'm not going to take my Dremel to a brand-new motherboard.

As for MythTV - I use network-based HDHomeRuns, so I don't need to worry about ESXi handling the tuners.
 
I had my BR10i installed internally and the normal PCI bracket installed so it's hidden...worked fine.
 
Hardware is here and I plan to build and test this weekend. I've got my vSphere 5 ISO patched with Chilly's 82579LM drivers and am looking forward to diving in.

That said, I have a few dilemmas I'm working through and would like some opinions:

1) Should I install ESXi on my 2 GB USB drive as planned, or would it make more sense to put it on the 120 GB drive along with my local datastore for the storage VM? I'm leaning toward the USB drive and really can't think of any reason not to. They are cheap and vSphere is easy to install.

2) Should I go with OpenIndiana or Illumian for the storage VM? I'm leaning heavily toward OI as it seems more mature.

3) What resources should I dedicate to the storage VM? I know I can change this later but would like a good starting point. Originally it will be a two-disk 1 TB ZFS mirror. A 2nd two-disk 640 GB mirror will be added later, with the possibility of a 3rd mirror (probably 2 TB). I plan to enable dedup and compression because I have the CPU / RAM to handle it and can't think of any reason not to. I'm planning to set it up with 2 cores and 8 GB (of 32 total) RAM but would appreciate educated opinions.

4) One of the systems this build is replacing is my file server (Samba on 64-bit Linux on a software RAID 1). Should I use a portion of my ZFS storage VM as my file server, or should I dedicate the entire ZFS pool to the ESXi datastore and create a separate file server VM? Is there a way I can do both without having to segregate the space in advance? I don't want to end up cramped for space on my Windows shares with several hundred GB of unused datastore space (or vice versa) and be unable to quickly rearrange my storage.

Martian
 
1.
Does not really matter. If you install ESXi on USB, you may change ESXi versions more easily.

2.
I would use OpenIndiana live.
Reason: VMware tools available.

3.
Depends on your VMs. I would use 12 GB for the storage VM.
Enabling compression is OK. Activate dedup only if you expect dedup ratios > 20.
Your RAM is not really enough for dedup plus enough RAM for ARC file caching.

4.
You create a datapool, connected via passthrough to your storage VM.
On this pool you can create filesystems / datasets. You can share them via NFS (for ESXi use) and/or SMB (accessible from Windows). Because of the pool concept, each dataset can use the whole pool capacity. This is not like normal partitions, where you must assign disk capacity up front.

The only reason for two pools is if you want one smaller high-performance pool for VMs and a larger storage pool (file services and backup).
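A minimal sketch of that layout, assuming a pool named "tank" and placeholder disk names:

    # create the mirrored datapool on the passed-through disks
    zpool create tank mirror c3t0d0 c3t1d0
    # one dataset for VM storage, one for file sharing
    zfs create tank/vmstore
    zfs create tank/shares
    zfs set sharenfs=on tank/vmstore    # mounted by ESXi as an NFS datastore
    zfs set sharesmb=on tank/shares     # visible to Windows via the kernel CIFS server
    # both datasets draw from the same pool free space - no fixed partition sizes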
 
Thanks Gea - that is exactly the kind of information I was looking for!

I'm still on the fence about dedup though. I think I would be in the 10-20% range, maybe more. I do store some backup files and ISOs that would dedup nicely. With only one mirror set, would it really be that much of a drain on my RAM?
 
I tried dedup for about a year; I'd say unless you really need that space, stay away. It's a performance killer. Take the extra money you'd have to spend giving it sufficient RAM (or an SSD for L2ARC) to hold the DDT, and just buy bigger drives.
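If you want hard numbers before deciding, ZFS can simulate the dedup table against existing data - a sketch, assuming a pool named "tank":

    # simulate dedup and print the projected ratio and DDT size (no changes made)
    zdb -S tank
    # rough RAM cost: ~320 bytes per unique 128K block,
    # so ~1 TB of unique data needs on the order of 2.5 GB of RAM for the DDT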
 
Hi,

I just want to thank you for all the information you've shared. I know you're probably really busy; I hope you can answer my questions.

Were you able to enable FT and use VT-d, VT-x, etc.? I'm going to build 2 servers and I really want those functions. Also, is the controller recognized by ESX?

Thanks
Benoit
 
 
VT-d and VT-x work fine. I only have one system so I can't speak to FT, but I see no reason why it wouldn't work if properly configured.
 
Got it all put together last weekend and have been slowly testing it and sorting out the bugs. I got hung up on a bad flash of the BR10i that left me with only 2 ports working, but re-flashing it (correctly) fixed that.

Here are the goods:

[Screenshot: ESXi.jpg]


Still in the learning / testing phase, but I believe I'm about ready to start moving some physical machines over.

I'm extremely happy with the hardware - especially the motherboard. While not a true "server-class" motherboard, its vPro AMT remote management works great with RealVNC Plus! I've not had a keyboard, mouse, or monitor connected since I set it up, and I've never had a CD / floppy drive connected at all - virtual CD rocks!

Granted I'm biased, but I really think I hit the sweet spot of price, performance, and power use with this build. I could have saved some money with AMD but would not have any remote management and the power draw would be considerably higher.

Moving forward:
I picked up an Intel PCIe x1 NIC for $22 on sale at Newegg, so that will go in the x1 PCIe slot. I've been debating between a dual- and a quad-port NIC for the x4 slot and have decided a quad would be best. I've also pretty much maxed out my two 8-port unmanaged gigabit switches and am looking at getting a cheap 24-port managed switch.

Does anybody have recommendations for a good quad-port NIC that is supported in ESXi 5, and for a good cheap 24-port managed switch that supports VLANs? The switch will be in the basement, so it doesn't need to be silent, but it can't sound like a jet either.

I'm looking at the Intel PRO/1000 VT, and the Dell PowerConnect 2624 and 2724, or the HP ProCurve 1800-24G for a little more money. Thoughts? I know the 2724 isn't a true managed switch, but it looks like it will meet my needs.

Martian
 
I'm running Intel PRO/1000 VT cards in my lab and they work great. I'm using the HP 1910-24G for a switch and would recommend the 1810-24G as well.
 
I really want a ProCurve 1810-24G or 1800-24G but just can't bring myself to spend that much money when I don't need any of the features they offer.

I'm now looking at the Dell PowerConnect 5324 and really can't find any reason not to get one. The PowerConnect 2624 and 2724 still appear viable as well.

Anyone have experience with any of these and care to render an opinion?


EDIT: Made an offer on a Powerconnect 5324 and to my surprise it was accepted! Hope I made the right choice. Looking forward to learning how to use a managed switch and being able to wire up all the ports I ran throughout my house and not just the ones I'm using.
 
That is the beauty of a port multiplier - there is no OS driver required. It presents as a single disk to the system.

Martian

But how do you tell when a drive fails? There's no speaker, and if there is an indicator light, it's at the back of the server.
 
Yeah, that is the problem - basically you don't know... which is why I decided not to go that route.
 
I am in a similar dilemma; I really like the option of passing the SATA card directly to a guest to handle ZFS.

The big benefits I see from hosting it on the ESX box, compared to a separate machine via iSCSI, are speed (because the VMs can access the NFS store at 10Gbps over the virtual switch) and reliability (fewer points of failure).

My ESX box will be on a UPS, but then I would need another one for the ZFS box - and what if my switch goes down, or the ESXi host, or the ZFS store?

If the all-in-one works fine with passthrough and performs better because of the 10Gbps virtual network (vmnet), and ZFS takes little CPU but lots of RAM, it seems like a no-brainer for me right now.

However, I do see the point of doing HA and playing with those features; it's just not required in my case.

Any more pros / cons of the all-in-one (ZFS as a guest providing a datastore to ESX)?

Thanks!
 
I don't really see a downside as you can always move the ZFS array out to a separate physical box at any time without having to reconstruct the array.
 
Hi,
Does anyone have a REAL working config that supports ESX with 32GB RAM, FT, and VT-d?
When I say working, I mean they have personally tried it.

Thanks
 
My build supports 32 GB of RAM and VT-d (I've passed through the BR10i and a couple of Intel NICs) - both tested and fully functional. I don't know about FT, though.
 
Thanks Martian.

I know your setup and it's great, but I really want to find someone who has tested everything. Thanks again.
 
The i7-2600 is not officially supported for FT by VMware.
I have one too; the most I can get it to do is FT for 32-bit guests on virtualized ESXi hosts.

SiteSurvey reports a failure for FT support too.

If you intend to have 2 physical whiteboxes to play around with FT, make sure your processor is listed here:

http://kb.vmware.com/selfservice/mi...nguage=en_US&cmd=displayKC&externalId=1008027

Martian, to use SiteSurvey you need to connect to a vCenter server instead of an ESX host directly. Once connected, you should see the SiteSurvey tab in the right pane when any of the ESX hosts is selected.
 