My educated guess is that ESXi will work on all of them (I've installed ESXi on most of the models listed there, though different hardware configurations and revisions do come into play). Standard procedure before moving the boxes to the new OS (i.e. while Windows is still installed) should be to make sure the motherboard and controller cards (drive, Ethernet, etc.) are running fully updated firmware.

We've done something similar at work, and my biggest piece of advice would be to budget in VirtualCenter. While ESXi can be administered with the standard Infrastructure Client, it's a PITA to log into each host individually to change something, maintain a local security database on each, and so on. You'll thank yourself later when you expand the number of ESX boxes in the future (trust me, you probably will).
 
While unsupported by VMware directly, those boxes should work; other people have gotten ESXi running on that hardware. See here

You need to read the notes for each one; there may be things that don't work, or you may need to do a little editing of files. Luckily my old EVGA board worked with both the SATA controller and Ethernet.

My suggestion would be to make sure the VMs are using shared iSCSI storage (VMFS), connect it to all of those machines, and use the two boxes that are on the HCL as the primary servers. If something goes bad, just fire up the VMs from the failing box on one of the others; this takes seconds when the storage is shared. I would also use the unsupported boxes for piloting, testing, and as temporary locations while hardware or VMware OS upgrades are happening.
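Since the VM files live on shared storage, "failover" here is really just re-registering the dead host's VMs on a survivor. A minimal sketch of that bookkeeping, with hypothetical host and VM names (on real ESX you'd do the re-registration through the Infrastructure Client, not this code):

```python
# Sketch: with shared VMFS/iSCSI storage, recovering from a failed host
# is just re-registering its VMs on a surviving host. The inventory
# dict and all names below are hypothetical.

def fail_over(inventory, failed_host):
    """Move every VM registered on failed_host onto the least-loaded
    surviving host. inventory maps host name -> list of VM names."""
    orphans = inventory.pop(failed_host, [])
    if not inventory:
        raise RuntimeError("no surviving hosts to take over the VMs")
    for vm in orphans:
        # pick the survivor currently running the fewest VMs
        target = min(inventory, key=lambda h: len(inventory[h]))
        inventory[target].append(vm)
    return inventory

# Hypothetical example: two HCL'd primaries plus a spare box.
hosts = {
    "esx-primary1": ["dns1", "web1"],
    "esx-primary2": ["dns2"],
    "esx-spare":    ["test-vm"],
}
fail_over(hosts, "esx-primary1")
```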

Also, a hint with VMFS: format with a larger block size. The default will only allow 256GB virtual disks. I would do this with both the shared storage and any local storage, just in case. For the local storage you would need to delete your VMFS file system and recreate it (an easy process), and for the shared iSCSI you just choose a larger block size when you initially format on the first machine that connects to the iSCSI disk.
 

256GB default, actually. 1MB block size = 256GB max virtual disk.
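The arithmetic behind that correction: on VMFS3 the maximum file (virtual disk) size scales linearly with the block size chosen at format time, at 256GB per 1MB of block size. A quick sanity check:

```python
# VMFS3 maximum virtual-disk (file) size as a function of the block
# size chosen when the datastore is formatted:
#   1 MB -> 256 GB, 2 MB -> 512 GB, 4 MB -> 1 TB, 8 MB -> 2 TB.

def max_vmdk_gb(block_size_mb: int) -> int:
    if block_size_mb not in (1, 2, 4, 8):
        raise ValueError("VMFS3 block sizes are 1, 2, 4, or 8 MB")
    return 256 * block_size_mb

for bs in (1, 2, 4, 8):
    print(f"{bs} MB block -> {max_vmdk_gb(bs)} GB max virtual disk")
```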

Most of those are on the community supported list, IIRC. They'll probably work.
 
[H]exx;1033882667 said:
Is there any software out there that would allow me to install Windows or Linux OS on the non-supported servers and run ESXi on top of that?

No. ESXi is a bare-metal hypervisor only.

Your systems should probably work though, barring controller issues.

Use OpenFiler for iSCSI if you want shared storage, or FreeNAS and NFS.
 
I don't think the 1750s will work, as their CPUs don't support virtualization, which is IMHO essential to make it all work. I can definitely say that ESXi wouldn't install on the 1750 I tried it on.
 

VT is not required for ESXi. It improves performance, but it's not required. There would have been a different reason - what error did it give you?

That is, if you want to run 32-bit guests; 64-bit guests will require VT. :)
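One way to check for VT before hauling ESXi onto a box is to boot any Linux live CD and look for the `vmx` (Intel VT-x) or `svm` (AMD-V) flag in /proc/cpuinfo. A small sketch:

```python
# Check a Linux /proc/cpuinfo dump for hardware-virtualization flags:
# "vmx" = Intel VT-x, "svm" = AMD-V. Run it from a live CD on the
# candidate box, or point has_vt() at a saved copy of the file.
import os

def has_vt(cpuinfo_text: str) -> bool:
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return "vmx" in flags or "svm" in flags
    return False

if __name__ == "__main__" and os.path.exists("/proc/cpuinfo"):
    with open("/proc/cpuinfo") as f:
        print("VT capable" if has_vt(f.read()) else "no VT flag found")
```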
 
I can answer for the PE 2850. I had one at my desk (until last week, when I had to give it to another dept) running ESXi 3.5 U2 for months without issue. It was for my testing, though, and wasn't used heavily, so results will vary. Especially compared to the 2950/1950: those are the first Dell rack servers to run dual dual-/quad-core Xeon processors and SAS drives, and the PE2900/1900 are the tower equivalents. Those servers fall into the 9th-generation classification of Dell servers, which introduced those CPUs and HDDs.

As for your question about running ESXi on top of Windows: no, but VMware Server would serve that purpose. Just load VMware Server on your Windows/Linux server, and you can create and manage virtual machines and networks through that server's GUI.
 
The issue with the 2850 is that the CPUs are most likely hyper-threaded, not multi-core. This can cause some performance issues.

I'd use the 1750s for services that ESX/ESXi relies on, like NTP and DNS.
 
No idea; it was a stock 1750, and it purple-screened on install. Those were old boxes on the way out, so I didn't fiddle with them much longer. I tried it on two machines just to make sure it wasn't busted hardware, and the result was the same on both. I gave them away so I can't test anymore; I'll see whether I can get hold of the guy I gave them to.
 

Yeah, I'd be curious to know, since ~most~ of the hardware on those should work :)
 

You are correct on the 2850. They are single-core Xeons with Hyper-Threading in that system.
 
I would venture to guess those should all work too.

I have put ESXi on quite a bit of hardware that "isn't supported".

The biggest help to me when running into issues has been this little website, for the tip on SATA/IDE booting:
http://www.squishnet.com/?p=21

And buy some good, recommended Intel NICs.
 
Thanks, corrected. I was feeling way too lazy to look it up at the time, but I should have guessed 256. :)

 