My new ESX Server - Waiting

marley1 (Supreme [H]ardness, joined Jul 18, 2000, 5,447 messages)
PowerEdge R900, 2x Quad Core E7330 Xeon, 2.4GHz, 6M Cache 80W, 1066MHz FSB (223-4226)

32GB Memory, 8X4GB, 667MHz (311-7853)

2x 300GB 15K RPM Serial-Attach SCSI 3Gbps 3.5-in HotPlug Hard Drive (341-4424)
3x 450GB 15K RPM Serial-Attach SCSI 3Gbps 3.5-in HotPlug Hard Drive (341-7199)

Intel Dual Port NIC

Got it free from Dell.

Going to be loading ESX Free on it for some testing.

I may virtualize the entire office, any thoughts? I was just using this as a test server, but maybe I could create VLANs in it: one for my office network (SBS 2003 server), one for client machines (a file server plus the client machines in for repair), and one for testing.

Any thoughts on that? Is it possible?
 
You can get those from vendors to demo (I think they called it Try and Buy when I worked at Dell) to make sure they will work in your environment. It depends on the account/org size and the amount of the purchase, I imagine. Or maybe it was comped with a large order or something.

We're in the process of virtualizing most of our datacenter with server specs like that (32GB RAM, dual quad-core Xeons), but using SAN storage instead of local storage. I don't see an issue with the way you are wanting to configure it. I think we're going to be putting 8-10 production VMs per blade with similar specs.
 
Dell gives out demo machines to resellers every quarter; we did a lot of sales, so we sent in a request.

As of now we are just using it as our test box, but I'm not sure if we should virtualize our office.

I think I would need another network card, right? I would have 3 VLANs (a rough sketch of the port groups follows the list):

office1 - SBS server and our workstations
office2 - client server and client workstations to be fixed (office1 can see office2)
test1 - test machines/servers
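For reference, here is a minimal sketch of what those three networks could look like from the ESX service console, assuming the port groups hang off one vSwitch with a trunked uplink; the vSwitch name, vmnic number, and VLAN IDs (10/20/30) are just placeholders, and on ESXi you'd build the same thing in the VI Client's networking tab instead:

esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1          # uplink NIC, plugged into a trunk (802.1Q) switch port
esxcfg-vswitch -A office1 vSwitch1
esxcfg-vswitch -v 10 -p office1 vSwitch1   # tag the office1 port group with VLAN 10
esxcfg-vswitch -A office2 vSwitch1
esxcfg-vswitch -v 20 -p office2 vSwitch1
esxcfg-vswitch -A test1 vSwitch1
esxcfg-vswitch -v 30 -p test1 vSwitch1

Each VM then just gets attached to the office1/office2/test1 port group it belongs to; the routing and the "office1 can see office2" rules would live on whatever router/firewall sits between the VLANs.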
 
For that, yeah, more NICs would be warranted: 1 or 2 more, since the 2 onboard NICs will cover ESX management and VM traffic. Plus, you could test with iSCSI if you add more NICs to the server.

At first though, you could try to route VM traffic through one of the onboard NICs and see if the speeds are fast enough for you. Then add as you need and shift VMs/traffic around. But 1 of the NICs will have to be utilized for management of ESX via the VI Client or VirtualCenter.
 
I would personally recommend a minimum of 4 NIC ports, all GbE. My personal preference is 6x GbE ports per ESX host: the four roles listed below, with the extra ports going to things like a "management tools" network, production VM NIC teaming, or iSCSI traffic (or dedicated to a particularly heavy VM). A rough service console sketch follows the list:

1x for VMotion
1x for VM "production data" network
1x for Service console for ESX host (VCServer management)
1x for VM "production data" load balancing (NIC teaming), or for a VM "management" network (to get to management ports on network devices from a VM with all of the management software on it).
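Something like this from the service console, as a sketch of how those roles map onto vSwitches; the vmnic numbers, IP, and port group names are made up, and on ESXi you'd build the same layout in the VI Client:

esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -A VMotion vSwitch1
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 VMotion   # VMkernel port; VMotion itself gets enabled on it in the VI Client
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2                        # second uplink = NIC teaming for the production VM traffic
esxcfg-vswitch -A "Production VMs" vSwitch2

The service console port (vSwitch0 on vmnic0) is normally created at install time, so it isn't shown here.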

I am getting ready to do an ESX Enterprise buildout for a client on Dell 2900 IIIs as well. Decent specs in those boxes; too bad they're Dells (they already have them).
 
I was told originally that 2 NICs would be fine for a test setup.

Not the case?
 
Here's a question I forgot to ask. Do you plan on having the virtual machines talking to each other only in the virtual environment, or do you want them to have connectivity to the rest of your office network? If all communication will be internal, then the 2 NICs will be fine. Hell, internal communication would only require the 1 NIC for virtual machine management. Now, if you were to have these systems communicate with the rest of the network outside of the ESXi environment, try using the other NIC for VM network traffic to see if the speeds are bearable. If not, add more NICs as needed to either team the NICs, or have different virtual machines communicate on different physical NICs to ease the traffic constraints on each NIC.
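For the internal-only case, the trick is simply a vSwitch with no uplink; a minimal sketch from the ESX service console (the names are placeholders, and ESXi does the same thing through the VI Client):

esxcfg-vswitch -a vSwitchInternal                    # no esxcfg-vswitch -L, so no physical NIC is attached
esxcfg-vswitch -A "Internal Only" vSwitchInternal

VMs on that "Internal Only" port group can talk to each other inside the host but never touch the physical network.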

To add, as sabregen said, more NICs in the system can also expand its capabilities.
- I wouldn't worry about VMotion since you only have one server and no shared storage (iSCSI or Fibre). VMotion is mainly if you want to have 2 or more servers clustered, so if one physical server goes down, the VMs would be moved over to another server until the down server came back up. This would require shared storage, either iSCSI or Fibre (not sure if a SCSI JBOD with cluster capabilities would do this, so please let me know if I'm right or wrong), so the physical servers could all see the VMs and have the capability of grabbing the VMs when connectivity is lost. The caveat to this is that the ESXi license will eliminate this functionality. If you don't add the license, you will have full ESX (not i) capabilities for like 90-180 days.
- Also, an additional NIC could be used for attaching iSCSI storage. This can be done for free by loading FreeNAS or OpenFiler on a desktop, setting up its storage for iSCSI, and assigning it to the iSCSI NIC on the ESXi box. For just testing and playing around, the desktop could communicate the iSCSI traffic and network traffic through the same NIC. But, if you decide to put this in production, it's best to add a 2nd NIC to your iSCSI box and dedicate that NIC for iSCSI traffic only, as network traffic can cause issues with iSCSI when they are on the same network. iSCSI likes to be separate on its own fabric, like Fibre. ESXi should be able to give you this functionality all the time without a trial period. (A rough command sketch follows.)
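To make the iSCSI bit concrete, here's a minimal sketch assuming full ESX with a service console (ESXi gets configured through the VI Client instead); the vmnic number, IPs, and the vmhba name of the software initiator are placeholders:

esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic3 vSwitch3
esxcfg-vswitch -A iSCSI vSwitch3
esxcfg-vmknic -a -i 192.168.3.10 -n 255.255.255.0 iSCSI    # VMkernel port for iSCSI traffic
esxcfg-swiscsi -e                                          # enable the software iSCSI initiator
vmkiscsi-tool -D -a 192.168.3.20 vmhba32                   # point it at the FreeNAS/OpenFiler target (adapter name varies, e.g. vmhba32 or vmhba40)
esxcfg-rescan vmhba32

On ESX 3.x the service console also needs a path to the iSCSI network for the software initiator to log in, if I remember right, so keep that in mind when you carve up the NICs.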

My apologies for not asking this earlier. I hope we didn't get too far off from what you're trying to get set up. Once you start learning VMware ESX/ESXi, it's hard to stop playing with it since the possibilities are endless.
 
To add, as sabregen said, more NICs in the system can also expand its capabilities.
- I wouldn't worry about VMotion since you only have one server and no shared storage (iSCSI or Fibre). VMotion is mainly if you want to have 2 or more servers clustered, so if one physical server goes down, the VMs would be moved over to another server until the down server came back up.

Slight correction on this - VMotion is ONLY if you have two or more hosts, a VCServer implementation (you can do a trial for 60 days), and shared storage. Also, you must have the VMotion / Storage VMotion license for either ESXi or ESX. However, you left out the possibility of using NAS (a supported storage medium) running NFS v3 or higher. You wouldn't be able to boot the hosts from NAS (duh), but you can boot the VMs from it.

This would require shared storage, either iSCSI or Fibre (not sure if a SCSI JBOD with cluster capabilities would do this, so please let me know if I'm right or wrong), so the physical servers could all see the VMs and have the capability of grabbing the VMs when connectivity is lost.

Unless it's a NAS...then it's a SAN. On a SAN, the file system to support VM storage is VMFS3. VMFS3 is a journaling, clustered file system. No other file system can be used for VM storage on a SAN. On NAS, because the device already has its own file system, ESX has to use that instead.
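For what it's worth, carving a LUN into a VMFS3 datastore from the service console looks roughly like this; the device path and label are placeholders (depending on the version it may be the vmhbaA:T:L:P form or a /vmfs/devices/disks/... path, and normally you'd just use the VI Client's storage wizard anyway):

vmkfstools -C vmfs3 -b 1m -S SAN_Datastore1 vmhba1:0:0:1   # format partition 1 of that LUN as VMFS3 with a 1MB block size

The 1MB block size caps individual VMDKs at 256GB; bigger block sizes raise that limit.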

The caveat to this is that the ESXi license will eliminate this functionality. If you don't add the license, you will have full ESX (not i) capabilities for like 90-180 days.

There's no license for the base edition of ESXi. The trial period is 60 days on all products. ESXi is meant truly for single host implementations, as it does not include a VCServer license. You can use ESXi indefinitely, without need of a license, but it doesn't have all of the "cool" features that VMware is known to carry with it: DRS, HA, SRM, VCB, VCServer, etc.

- Also, an additional NIC could be used for attaching iSCSI storage. This can be done for free by loading FreeNAS or OpenFiler on a desktop, setting up its storage for iSCSI, and assigning it to the iSCSI NIC on the ESXi box. For just testing and playing around, the desktop could communicate the iSCSI traffic and network traffic through the same NIC. But, if you decide to put this in production, it's best to add a 2nd NIC to your iSCSI box and dedicate that NIC for iSCSI traffic only, as network traffic can cause issues with iSCSI when they are on the same network. iSCSI likes to be separate on its own fabric, like Fibre.

I would recommend either obtaining a switch that can do VLANs, and segregating out the storage traffic from the rest of the network traffic, or using a separate physical switch. Please don't make the common mistake of the typical small IT department: "Hey, it's ethernet, we can just plug it into our existing switches!" I would strongly recommend against it.

ESXi should be able to give you this functionality all the time without a trial period.

Correct. ESX and ESXi both include software iSCSI initiators that can be used over any of the supported GbE or faster NICs.
 
There's no license for the base edition of ESXi. The trial period is 60 days on all products. ESXi is meant truly for single host implementations, as it does not include a VCServer license. You can use ESXi indefinitely, without need of a license, but it doesn't have all of the "cool" features that VMware is known to carry with it: DRS, HA, SRM, VCB, VCServer, etc.

Just want to correct this: there is a license for ESXi, it's just that it's free and you have to register to get it from VMware.
 
Actually, that's not correct. VMware only gives licenses for Enterprise/Datacenter products. If you have a license for ESXi, then it is not free, and you are licensing the features which enable multiple host interactions (ESXi can be licensed for HA, DRS, VMotion / Storage VMotion, etc.). The free "license" you are referring to is a serial number, in VMware's own terminology. It may seem like nitpicking, but it is important to know the difference for the VCP and VCDX tests.

Serial Numbers - provided for free desktop virtualization solutions, paid desktop virtualization solutions, and ESXi. Also offered for trial (60-day) access on other products.

Licenses - used for Datacenter/Enterprise products. Licenses are used either on a per-host basis, or in a consolidated license file running on a license server. ESX or ESXi implementations where multiple host interactions are required use a license file.
 
Well, it's not a lic file, but you still need to get a serial from them. Not like you can install it and let it go indefinitely without ever registering the product.
 
Yes, you do need the serial to download it with ESXi. With Workstation, you have to have the serial to install (as with other desktop virtualization products). I was merely trying to make sure that people knew that VMware makes a distinction between licenses and serials.
 
Yes, you do need the serial to download it with ESXi. With Workstation, you have to have the serial to install (as with other desktop virtualization products). I was merely trying to make sure that people knew that VMware makes a distinction between licenses and serials.

Yup, and I just wanted to make sure that everyone knew they still needed to register with VMware to continue to use ESXi.
 
Okay, originally my idea was to use this server just to test operating systems. I have 5 static IPs, so I was going to set up a pfSense box and run the ESX server off that. I was hoping to have multiple virtual networks that I could mess with (say virtual1 - SBS, 2 Vista, and 2 XP machines / virtual2 - SBS 2008 and some workstations). I wanted those to run all the time so I can just log in through the web and show clients different setups/etc.

Now I was thinking, since this server cost 8 grand, maybe use it for the entire office. Maybe have my office SBS 2003 server migrated onto this and have it connected to my 4 physical workstations. I also have my DMZ network with a Server 2003 box that hands out DHCP for my 192.168.2.x network, which is client machines. I was figuring I'd turn that server into a virtual machine too. Then have a 3rd network just for testing. Is that all possible?

The main thing I don't fully understand is all the networking (I haven't had much experience with VLANs).

Say I have 1 router that plugs into a switch (gateway 192.168.1.1), and then I run a network cable from that switch to a physical NIC on the test server.

I then create what I think is vSwitch1, vSwitch2, and vSwitch3. All of them will, I guess, bridge with physical NIC 1. That gets them all online, right?

But then if I make them the 192.168.1.x, 192.168.2.x, and 192.168.3.x networks, how exactly do I even open up ports to them?

If I log into my 192.168.1.1 router and port forward, can I tell it to open ports to 192.168.2.10, or will it just give me an error? Do I need to do a different IP scheme?
 
- Also, an additional NIC could be used for attaching iSCSI storage. This can be done for free by loading FreeNAS or OpenFiler on a desktop, setting up its storage for iSCSI, and assigning it to the iSCSI NIC on the ESXi box. For just testing and playing around, the desktop could communicate the iSCSI traffic and network traffic through the same NIC. But, if you decide to put this in production, it's best to add a 2nd NIC to your iSCSI box and dedicate that NIC for iSCSI traffic only, as network traffic can cause issues with iSCSI when they are on the same network. iSCSI likes to be separate on its own fabric, like Fibre. ESXi should be able to give you this functionality all the time without a trial period.


FreeNAS (and any *BSD form of iSCSI target) is not supported for a reason - their VPD page 83 support is broken, and they will NOT work reliably for more than 1 host. You have been warned. Enjoy your corrupted VMs. They SCREAM for NFS though, especially for a free solution.
Vice versa - OpenFiler supports page 83 correctly, but is slow as snot at NFS. Enjoy the mix-and-match.
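If you do go the FreeNAS-over-NFS route, mounting an NFS export as a datastore is a one-liner from the service console; the server IP, export path, and label here are made up, and the VI Client's Add Storage wizard does the same thing:

esxcfg-nas -a -o 192.168.3.20 -s /mnt/vmstore NFS_Datastore1
esxcfg-nas -l                                              # list NAS datastores to confirm it mounted

You still want a VMkernel port that can reach the NFS box, same as with iSCSI.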
 
Unless it's a NAS...then it's a SAN. On a SAN, the file system to support VM storage is VMFS3. VMFS3 is a journaling, clustered file system. No other file system can be used for VM storage on a SAN. On NAS, because the device already has its own file system, ESX has to use that instead.
Back-to-front journaling, clustered with distributed locking based on VMkernel ID, a new-era filesystem that supports extents - that's VMFS3, to be precise :D And for whoever might have the delusion, as I've run into this a few times - it is NOT EXT3. It is not related to EXT3. It has nothing to ~do~ with EXT3. :)

NFS filesystems work for anything - the best seem to be XFS, EXT3, or WAFL though (WAFL is NetApp's filesystem).
Correct. ESX and ESXi both include software iSCSI initiators that can be used over any of the supported GbE or faster NICs.

Fair warning - the Broadcom cards that claim to support hardware iSCSI are NOT hardware iSCSI HBAs. They work great for software, though. :)

Oh, and use a separate physical switch. A separate VLAN only separates broadcast traffic - you've still only got a gig of bandwidth there, and that'll saturate FAST under iSCSI/VMotion.
 