Yet Another ESXi Build Suggestions Thread

Jesse B

I'm wanting to build the all-too-popular all-in-one ESXi/NAS rig. I've got a real barebones server I'm running currently, which I'm going to install ESXi on (it's been in another city, arriving tomorrow finally). Specs are in the sig. I plan to give this some serious upgrading in the near future, but just don't have the money for it right now. I'll discuss this later on.

Having never played with ESXi, I'm just not sure how to go about a few aspects here. I want to pass through my two F4s to the NAS OS (haven't decided if I'm gonna go Solaris Express/OpenIndiana/FreeBSD, or just use something simple like FreeNAS yet) to use as the primary datastore. I was then hoping to be able to just install any further VMs to this pool from there on. I'm not sure whether I'd access this pool from ESXi via NFS or iSCSI or what.
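For what it's worth, the usual loopback setup for these all-in-one boxes is to export the ZFS pool over NFS and mount it back into ESXi as a datastore. A rough sketch, assuming an OpenIndiana/Solaris-style NAS VM (the pool name, IP, and datastore name here are placeholders, not from the thread):

```shell
# On the NAS VM: share the ZFS filesystem over NFS
zfs set sharenfs=on tank/vmstore

# On the ESXi host (Tech Support Mode shell): mount it as a datastore
esxcfg-nas -a -o 192.168.1.50 -s /tank/vmstore vmstore
esxcfg-nas -l   # list NFS datastores to confirm it mounted
```

iSCSI works too (ZFS zvols exported via COMSTAR), but NFS is generally the simpler option for an all-in-one rig.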

First off, am I able to passthrough the built-in SATA controller on my motherboard, or do I need to buy a controller to do this? I was just planning on running ESXi off a flash drive for now, and eventually installing it on a 2.5" drive or something when I upgrade. If I do need a controller, what's an inexpensive solution? I can't see myself needing more than 4 disks. Also, for future reference, am I able to passthrough just specific disks, or does it have to be everything connected to the controller?

I'm a tad confused about how I'd go about making the pool, however. I assume I'd have to install the NAS VM to the same flash drive as ESXi (there is room), and then install the rest of the VMs to the pool from then on. Obviously this is less than ideal, but it's just a temporary setup, so I'm hoping it'll work.


When I do eventually upgrade my server, I'm wondering what kind of specs I will need. I'm not wanting to spend a boatload of money on this, but I also want to ensure that I have room for expansion should I need it. I plan on running the following VMs (as of now):

- NAS/File Server (as mentioned: Solaris Express/OpenIndiana/FreeBSD/FreeNAS/etc.)
- LAMP Stack + PHP and Python Frameworks (Probably just Ubuntu Server)
- Media Server (Haven't decided on OS yet, just needs to stream to PS3)
- Windows 7
- Some Linux Desktop

All I really know is that I'll need 8GB of RAM to comfortably run all those VMs. This is fine; I'll probably upgrade to 16GB as money permits. I have no idea what will be required CPU-wise, though. Will two cores suffice, or do I need four? Will server-grade hardware make a big difference, or am I fine just using consumer-grade? How much of a difference does Hyper-Threading make? What about ECC?


If there's any articles/threads that explain any of my questions, feel free to just post them.

Thanks for reading,


- Jesse
 
You can't pass through individual disks, only the PCI controller(s). Whether it works with your mobo can only really be determined by experimenting. If it doesn't, shell out for a controller. The term you want to look up is VT-d; the CPU *and* the mobo both have to support it. If not, no passthrough for you. I have ESXi on a flash drive, using a datastore on a laptop IDE drive, both of which are used by ESXi, and I passed through the Cougar Point SATA controller on the mobo. OpenIndiana works just fine that way.
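If you want to check for VT-d/AMD-Vi support before committing, booting any Linux live USB on the box and grepping the kernel log is a quick sanity check (this assumes the relevant option is enabled in the BIOS first, or the tables won't show up):

```shell
# Intel boards log DMAR (the VT-d ACPI table); AMD boards log AMD-Vi/IOMMU
dmesg | grep -i -e DMAR -e IOMMU

# CPU-side VT-x/AMD-V flags (NOT the same thing as VT-d, but worth confirming too)
grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u
```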
 

Thanks for your reply. My CPU says it has "Virtualization Technology Support", which I assume is just the AMD equivalent of VT-d, or just a general term. I'm not sure whether the motherboard supports it, but I will definitely look into it. I don't care too much that individual disks can't be passed through; I was just curious :)

I'll look into the method you're using to run ESXi as well.

Thanks,


- Jesse
 
No, Intel's plain virtualization tech is VT-x (AMD-V on your side); that's just about running hypervisors more efficiently, NOT the same as VT-d.
 
Ahh, alright. I'll have to go do some reading on what all those are.

Unfortunately my chipset doesn't support IOMMU. Oh well, guess I'll just have to upgrade.

Thanks,


- Jesse
 
It really kicks ass. My virtualized OI ZFS SAN gets almost native speed.

That sounds fantastic. Not having half a dozen servers running almost sounds equally fantastic (to my wallet at least ;) ).

I think I'm just gonna build a whitebox. I'm having trouble justifying spending the extra few hundred dollars for server-grade equipment. I don't feel I'm asking a lot out of this server, and the hardware I've picked out is all on the Whitebox HCL. Throw an Intel NIC in there, good to go ;)

Pretty much just gotta wait until my current hardware shows up tomorrow so I can at least start fiddling with things.

Also, should I just get a small hard drive to run ESXi off, or will USB/CF suffice? I'm just not sure how much writing it does, and I don't want to have to replace the drive constantly.

Thanks,


- Jesse
 
Eeek. Well, very few desktop-class mobos I know of support VT-d, and the server ones are all in the ballpark of $200. About the same again for a CPU that supports it. I think you are SOL :(
 

Yeah, I'll just keep looking at my options. If I have to wait, I have to wait. Such is life.
 
A different idea: instead of virtualizing the storage system, have the storage system run the VMs. E.g. you could run the OpenIndiana GUI install for the ZFS storage, then install VirtualBox on top of it and run your VMs there...
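If you go that route, a headless VirtualBox VM can be scripted entirely with VBoxManage. A rough sketch, not from the thread (the VM name, paths, disk size, and NIC name are placeholders; `e1000g0` is just an example Solaris interface name):

```shell
# Create and register a VM, give it RAM and a bridged NIC
VBoxManage createvm --name ubuntu-server --ostype Ubuntu_64 --register
VBoxManage modifyvm ubuntu-server --memory 1024 --nic1 bridged --bridgeadapter1 e1000g0

# Attach a 20GB disk and the install ISO
VBoxManage createhd --filename /tank/vms/ubuntu-server.vdi --size 20480
VBoxManage storagectl ubuntu-server --name SATA --add sata
VBoxManage storageattach ubuntu-server --storagectl SATA --port 0 --type hdd \
    --medium /tank/vms/ubuntu-server.vdi
VBoxManage storageattach ubuntu-server --storagectl SATA --port 1 --type dvddrive \
    --medium /tank/isos/ubuntu-server.iso

# Boot it headless; connect over RDP to do the install
VBoxManage startvm ubuntu-server --type headless
```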
 
Why that never occurred to me is... beyond me. I guess I was just so infatuated with trying one of these hypervisors out that I overlooked the obvious solution :D

I think I'll do exactly what you recommended. Thank you very much, I'll update this thread tomorrow evening when I have things installed :D
 
Also, I see no reason why not, but am I able to set up a mirrored array, and install OpenIndiana to said mirror?
 
I don't think so. It's easy enough to convert the rpool to a mirror after boot, though. Attaching the second drive is one command, but you need to update the boot blocks on the second drive too. Google should show the way :)
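The post-install mirror conversion described above looks roughly like this on OpenIndiana (the disk names are examples only; check `format` or `zpool status` for the real ones on your box):

```shell
# Attach the second disk to the root pool; ZFS resilvers automatically
zpool attach rpool c3t0d0s0 c3t1d0s0
zpool status rpool          # wait for the resilver to finish

# Put GRUB boot blocks on the new disk so either drive can boot
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t1d0s0
```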
 
Cool, thanks. Never used anything Solaris based, so I've got a bunch of reading ahead of me.
 
Also, consider seriously installing napp-it (from napp-it.org). Nice web-based gui for Opensolaris based ZFS boxes.
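For reference, the napp-it install at the time was a one-liner run as root (check napp-it.org for the current instructions, and be aware you're piping a script straight from the web into perl):

```shell
# As root on the OpenIndiana/OpenSolaris box
wget -O - www.napp-it.org/nappit | perl
```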
 
I'm basically building the same system as the OP, but with the further requirement to run a CallManager lab and some GNS3, as well as recording TV and converting it and such.

Most guides advise using an LSI 1068- or LSI 2008-based controller, but those don't appear to be available in Germany. What other options do I have? Can I use the SATA ports/controller directly on the motherboard?

What are some recommended motherboard/processor combos, preferably the low-power kind? I'm willing to spend up to 1000 euros, more if needed, but would prefer to spend less if possible...
 