CCIE Voice Lab

Hi All,

I've got a few questions. I'm planning to set up a home CCIE Voice lab along with a few other servers for personal use (5 total: CCM, Unity, LAMP, a Dynamips host, and Win2k).

I could do this in a traditional VM setup, but I was wondering if using ESXi would be more efficient/effective? Would it run faster/better?

The LAMP server will be my Linux learning machine, as well as hosting a personal website. Win2K will run the IP Phones and let me move the printer (not Linux-compatible) outside the bedroom. The rest is going to be a playpen for me.

Before any license blahblah comes up: my boss invests in training materials (especially for CCIE), so they will be legally provided. He also might chip in if I share with colleagues. :)

I'm wondering if there are any recommended hardware lists (quad-core, 8GB RAM (perhaps more), 500GB-1TB storage) out there, or at least hardware it's best to avoid?

Are there any specific motherboards with especially good track records?

Furthermore, what kind of quirks can I expect while running an ESXi server? Are there any things that won't work that would in a traditional VMware Workstation setup?

BTW... the LAMP and WIN2K hosts should be running 24/7.

Guess that's a start for now. Has anyone else done such a thing, or can anyone recommend a good getting-started guide (for ESXi, or for an ESX CCIE Voice lab)?
 
Hardware to run it on - are you planning on building a whitebox (sounds like this might be the way you're leaning), or buying a new/used OEM machine? Need more information on this.

ESXi versus Workstation - You won't get USB port passthrough to VMs on ESXi the way you would with Workstation. In Workstation, the underlying host OS recognizes the attached USB device, so you can pass it through (most of the time) to the VM. In ESXi, the hypervisor itself is the host OS, and its HCL is very limited. For this reason, USB passthrough isn't available.

ESXi is very easy to configure; just be aware that it has no service console (like ESX does), and therefore any 3rd-party applications that specifically list ESX functionality likely will not work, as they tap into ESX through the service console (which ESXi does not have). The first products that come to mind as using this functionality are backup software.

If this is going to be just a test environment, I would recommend using an NFS target for the VMs, so that you can easily read/back up the VMs over the network with simple file copies from the NAS. If you plan to run the VMs on local disk inside the ESXi host, good luck backing those up: the VM files have to live on a VMFS3-formatted partition, and to date I know of nothing that can easily read VMFS3 (at all).
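The "simple file copies from the NAS" idea can be sketched as a small shell helper. The mount points and VM name below are assumptions for illustration, not anything from the original post:

```shell
#!/bin/sh
# Sketch: back up a VM that lives on an NFS datastore with a plain file copy.
# Because the NAS sees ordinary files (no VMFS), cp is all that's needed.

backup_vm() {
    datastore="$1"   # mount point of the NFS export that ESXi uses
    backup="$2"      # any other directory or share on the NAS
    vm="$3"          # the VM's directory name (power the VM off first)
    stamp=$(date +%Y%m%d)
    mkdir -p "$backup/$vm-$stamp"
    cp -a "$datastore/$vm/." "$backup/$vm-$stamp/"
}

# Example with assumed mount points:
# backup_vm /mnt/nfs-datastore /mnt/backup ccm-lab
```

Run it from the NAS (or any box that mounts the export) with the VM powered off, so the .vmdk files are quiescent during the copy.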

Make sure you run ESXi 3.5 Update 4, as it adds a lot of SAS/SATA and VCB compatibility fixes over the previous versions, especially if you're planning on building a whitebox for the project. Many motherboards' onboard SATA ports are not "supported" storage controllers for ESXi, and the installer will report as much with a message similar to "no supported disk devices."

I've had very good luck with nForce 3600-based and Intel 975 (and higher) based motherboards for whitebox configurations. Alternatively, you could pick up any whitebox plus a Dell PERC 5/i with BBU and cache module for only $150 or so, and put that in almost any motherboard.
 
Yes, I am planning to use a whitebox setup for this.

Does ESX have proper USB support? My boss has already voiced interest in supporting this project, so that MIGHT not be a completely outrageous idea?

That said, in the end USB isn't a critical requirement.

The tip about the NAS is also very important. It changes the concept, but in a way I like it better this way: it provides a bit more flexibility and offloads some of the server load...

Especially since part of the concept might be to allow colleagues access to the setup. Each would have their own CCM/Unity/W2K VMs and load them in and out as they wished (at scheduled times); a set of originals stored separately would let everyone start from a clean setup.
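That clean-setup workflow could be sketched roughly like this, assuming a directory of "golden" originals kept on the NAS. All directory and user names here are hypothetical:

```shell
#!/bin/sh
# Sketch: give a colleague a fresh lab by copying the golden originals.
# "masters" holds the pristine CCM/Unity/W2K VM directories; never run those directly.

reset_lab() {
    masters="$1"   # directory with the clean original VM copies
    labdir="$2"    # working datastore directory for per-user VMs
    user="$3"      # prefix so each colleague gets their own set
    for vm in ccm unity w2k; do
        rm -rf "$labdir/$user-$vm"
        cp -a "$masters/$vm" "$labdir/$user-$vm"
    done
}
```

Each scheduled session then starts with `reset_lab`, so nobody inherits the previous user's half-broken configuration.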
 
USB support under ESXi exists on the host only, and only for storage devices. There's no passthrough to the VMs, currently.

The NAS makes it easy: you can use any device that supports NFS v3 or higher, and that will usually be a very cost-effective solution. The best solution would be a SAN, but you're talking about a dev environment, where I imagine cost is a factor. A basic SAN (new) will run you at least $10k, without disks or licenses.

NAS-based VM backups are incredibly easy to facilitate, and plenty of software can do it; even simple .tar or .zip archives containing an entire VM's configuration and disk files work for easy transport. The files could then be carried on a USB HDD and imported into a Workstation/Player environment for further configuration and deployment finalization/portability.
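The tar "containering" step could look like the sketch below; the datastore path and VM name are placeholders, not from the original thread:

```shell
#!/bin/sh
# Sketch: pack a whole VM directory into one tarball for transport on a USB HDD,
# and unpack it again on the destination (e.g. a Workstation/Player host).

archive_vm() {
    datastore="$1"; vm="$2"; dest="$3"
    # -C makes the archived paths relative, so the tarball extracts cleanly anywhere
    tar -czf "$dest/$vm.tar.gz" -C "$datastore" "$vm"
}

restore_vm() {
    archive="$1"; target="$2"
    tar -xzf "$archive" -C "$target"
}
```

On the destination you would extract with `restore_vm`, then open the .vmx in Workstation/Player.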
 
Another alternative for shared storage would be to use Openfiler to present local storage as iSCSI targets.
 
What is your budget?

A nice/ideal test environment would be:

Openfiler NAS running on whitebox hardware with a good RAID controller
Tier-one server hardware on VMware's HCL for ESXi (something like a Dell PowerEdge 1950 III)
 