Your home ESX server lab hardware specs?

In a cluster, why would you want to pass it through on the host anyway? It defeats the whole purpose of a VM cluster if the VM can't move between hosts. We use network USB hubs in both the VMware and Hyper-V environments.

Client Hyper-V is lacking, that I will agree with, but in the data center they trade blows. They both have pros and cons, and that's why we have both.

I gave a presentation at a tech conference on Hyper-V vs VMware without the FUD and with an emphasis on technological and administrative differences. I'll have to share it here and we'll get a good discussion going.
 
I gave a presentation at a tech conference on Hyper-V vs VMware without the FUD and with an emphasis on technological and administrative differences. I'll have to share it here and we'll get a good discussion going.

I would be interested in that.
 
In a cluster, why would you want to pass it through on the host anyway? It defeats the whole purpose of a VM cluster if the VM can't move between hosts. We use network USB hubs in both the VMware and Hyper-V environments.

Client Hyper-V is lacking, that I will agree with, but in the data center they trade blows. They both have pros and cons, and that's why we have both.

Fair enough.

For my uses, virtualization is all about consolidation, I've never had any need, or desire to cluster/move between hosts.

So, I consolidate and Direct I/O forward hardware to high heaven, as I'll never have to worry about migration.

I was thinking of USB for the purposes of, as an example, small to medium business server room UPS connectivity. There are a bunch of rack-mount UPSes that use USB as their communication method.
 
I gave a presentation at a tech conference on Hyper-V vs VMware without the FUD and with an emphasis on technological and administrative differences. I'll have to share it here and we'll get a good discussion going.

I'd be interested to hear your perspective.

Before you start, any disclosures you need to make?

(for instance, work for an IT consulting firm primarily implementing one brand over the other, etc.)

That being said, maybe that discussion belongs in a thread of its own, rather than in here.
 
Zarathustra[H];1041283290 said:
Fair enough.

For my uses, virtualization is all about consolidation, I've never had any need, or desire to cluster/move between hosts.

So, I consolidate and Direct I/O forward hardware to high heaven, as I'll never have to worry about migration.

I was thinking of USB for the purposes of, as an example, small to medium business server room UPS connectivity. There are a bunch of rack-mount UPSes that use USB as their communication method.

With Hyper-V you have the Windows host OS to initiate the shutdown from the APC. I tend to use Hyper-V with small clients, especially if they are all Windows. Backups are so much easier.
 
With Hyper-V you have the Windows host OS to initiate the shutdown from the APC. I tend to use Hyper-V with small clients, especially if they are all Windows. Backups are so much easier.

Yeah, this is one area where it would be nice if ESXi would improve.

Including a UPS-based shutdown in the host OS that does not depend on a guest would make life so much easier.

Granted, it is less of a problem for non-free users, as they have access to the vMA, but for free users, setting this up with a guest that has forwarded connectivity to the UPS (I use serial) and then uses SSH to the host to initiate shutdown is both complicated and forces you to leave SSH open on the host...
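
For what it's worth, here is a minimal sketch of that guest-side hook, not anything official: the host address, key path, and the exact host-side shutdown command are all assumptions and vary by ESXi version. The idea is that the UPS daemon in the guest calls something like this on "battery low", SSHes into the host, and powers it off.

```python
# Minimal sketch of a guest-side UPS hook. Assumes SSH is enabled on the ESXi
# host with key-based root login, and that the UPS daemon in the guest
# (apcupsd, NUT, whatever you use) calls this script on "battery low".
# Host address and key path are placeholders.
import paramiko

ESXI_HOST = "esxi.lan"            # placeholder: host management address
SSH_KEY = "/etc/ups/esxi_id_rsa"  # placeholder: key authorised on the host

def shutdown_esxi_host():
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(ESXI_HOST, username="root", key_filename=SSH_KEY, timeout=15)
    try:
        # Log what is registered on the host before pulling the plug.
        _, out, _ = client.exec_command("vim-cmd vmsvc/getallvms")
        print(out.read().decode())
        # Power the host off. This leans on the host's VM autostart/autostop
        # settings to shut the guests down gracefully first; the exact
        # command varies a bit between ESXi versions.
        client.exec_command("poweroff")
    finally:
        client.close()

if __name__ == "__main__":
    shutdown_esxi_host()
```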

Overall I am very happy with ESXi though. When I first started setting things up, the alternatives were still not quite up to speed. If I were starting from scratch today, not sure what I'd use.

Probably Xen, or ESXi, as I still have a bit of a bias against anything Microsoft (or anything with a GUI at all) running on a server.
 
Zarathustra[H];1041283392 said:
Yeah, this is one area where it would be nice if ESXi would improve.

Including a UPS-based shutdown in the host OS that does not depend on a guest would make life so much easier.

Granted, it is less of a problem for non-free users, as they have access to the vMA, but for free users, setting this up with a guest that has forwarded connectivity to the UPS (I use serial) and then uses SSH to the host to initiate shutdown is both complicated and forces you to leave SSH open on the host...

Overall I am very happy with ESXi though. When I first started setting things up, the alternatives were still not quite up to speed. If I were starting from scratch today, not sure what I'd use.

Probably Xen, or ESXi, as I still have a bit of a bias against anything Microsoft (or anything with a GUI at all) running on a server.


My Hyper-V hosts don't have GUIs :p Very soon my Windows boxes won't either, unless an application requires it.

But it's mostly about what works best for you. All the hypervisors have pros and cons. It's just a matter of balancing those for your needs.
 
Zarathustra[H];1041283177 said:
USB difficulty could be an issue, but remember, the target audience for these VM hosts is servers sitting in racks in a server room, so it's really not fair to bash them for things like lack of audio passthrough.

I consider anything desktop friendly to be a bonus, more than I do an expectation.

Of course, different users/uses, different requirements.
I still think that Hyper-V is very Microsoft-centered, versus ESXi being more generally centered (multi-OS).
 
Zarathustra[H];1041283097 said:
That makes a lot of sense actually.

If virtualization were around back in my college days when I had limited space/budget, this is definitely something I would have done.

How does one manage ESXi in this case though? Using the ESXi app from the guest which has the video card forwarded to it? What if there is a problem? One must have a secondary way of reaching the ESXi server in that case. Can't rely on the guest.

You can use the client from a guest VM (the one where you have configured the video output from the GPU), or another computer.
If you have vSphere, then everything is via a browser.
I agree with you that you need a secondary way to reach the host, especially when you have issues with a guest or the host crashes.
 
In a cluster, why would you want to pass it through on the host anyway? It defeats the whole purpose of a VM cluster if the VM can't move between hosts. We use network USB hubs in both the VMware and Hyper-V environments.

Client Hyper-V is lacking, that I will agree with, but in the data center they trade blows. They both have pros and cons, and that's why we have both.

USB over IP huh...

I pass through USB to my desktop VM at home.
How responsive are these networked hubs, good enough for keyboard mouse? Good enough for gaming?
 
It is not, based on what I saw on several forums.
In my case, the GPU is used for some light gaming by my wife and by my friends when they come to the house. I have a Windows 7 VM to which I pass through the GPU; this gives the user a pretty nice gaming rig. Most of the time, the ESXi server is running other stuff.

In my research on the internet, people do that for gaming rigs, for Plex media servers, and the like.

Wow, is this actually working reliably in ESXi 5.5 now? Last I heard it was hit and miss.
Is it now possible to pass through a mid-range AMD video card and get full sound/graphics via HDMI? I've currently got my HTPC sitting literally directly above my ESXi server....
 
Wow, is this actually working reliably in ESXi 5.5 now? Last I heard it was hit and miss.
Is it now possible to pass through a mid-range AMD video card and get full sound/graphics via HDMI? I've currently got my HTPC sitting literally directly above my ESXi server....

I don't see why it wouldn't work, but keep in mind, for full reliable passthrough, you need a CPU, Motherboard and BIOS combination that supports VT-d (on Intel) or IOMMU (on AMD).

While I have heard that some hardware doesn't like VT-d/IOMMU, I have yet to come across anything that failed to work properly, but I haven't tried a GPU yet.

Depending on what hardware you use, this may or may not be easy to accomplish.

Any recent Xeon or Opteron system should do it. With consumer hardware it can be trickier.

All AMD FX chips support it, as do most 990FX motherboards and BIOSes, but they don't necessarily document this, so it can be a matter of buying a board and testing.

I don't know if this is enabled in any FM2 chips.

On the Intel side, any non-K CPU has VT-d support built in, but it can be VERY difficult to find motherboard BIOS support.

If you log in with the client to your ESXi server, look on the host Summary page under General at the last line, "DirectPath I/O:". If it says "supported", you should be good. Otherwise it will likely be a fail. (See image below)

[Screenshot: the host Summary page with the "DirectPath I/O" line under General]



There IS a software GPU passthrough that doesn't rely on DirectPath I/O, but it is generally terrible for anything but basic desktop use.


Also, when you enable DirectPath I/O for a particular PCIe device, the host no longer sees the device (except in the Advanced passthrough settings). It will be owned solely by the guest. The guest will also pre-allocate all RAM assigned to it, and won't share it with other guests, so you may need to add more RAM, depending on your host setup.

Once DirectPath I/O is enabled for a guest, you also lose any vMotion on that guest.

Also, when passing through, I am not sure if the GPU and audio hardware on the video card show up as the same device, or separate devices. You MAY have to pass through both.
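
As an aside (not from the original post), the same capability information the client shows can be pulled programmatically with VMware's pyVmomi SDK. A rough sketch, with the host address and credentials below as placeholders:

```python
# Rough sketch using the pyVmomi SDK to list which PCI devices each host
# reports as passthrough-capable; roughly what the client shows under the
# DirectPath I/O / advanced passthrough settings. Address and credentials
# below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab host with a self-signed cert
si = SmartConnect(host="esxi.lan", user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        # Map PCI device ids to readable vendor/device names.
        names = {d.id: "%s %s" % (d.vendorName, d.deviceName)
                 for d in host.hardware.pciDevice}
        for info in host.config.pciPassthruInfo or []:
            if getattr(info, "passthruCapable", False):
                state = "enabled" if info.passthruEnabled else "disabled"
                print("  %s  %s  (%s)" % (info.id, names.get(info.id, "?"), state))
finally:
    Disconnect(si)
```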
 
Zarathustra[H];1041294465 said:
I don't see why it wouldn't work, but keep in mind, for full reliable passthrough, you need a CPU, Motherboard and BIOS combination that supports VT-d (on Intel) or IOMMU (on AMD).

While I have heard that some hardware doesn't like VT-d/IOMMU, I have yet to come across anything that failed to work properly, but I haven't tried a GPU yet.

Depending on what hardware you use, this may or may not be easy to accomplish.

Any recent Xeon or Opteron system should do it. With consumer hardware it can be trickier.

All AMD FX chips support it, as do most 990FX motherboards and BIOSes, but they don't necessarily document this, so it can be a matter of buying a board and testing.

I don't know if this is enabled in any FM2 chips.

On the Intel side, any non-K CPU has VT-d support built in, but it can be VERY difficult to find motherboard BIOS support.

If you log in with the client to your ESXi server, look on the host Summary page under General at the last line, "DirectPath I/O:". If it says "supported", you should be good. Otherwise it will likely be a fail. (See image below)

[Screenshot: the host Summary page with the "DirectPath I/O" line under General]



There IS a software GPU passthrough that doesn't rely on DirectPath I/O, but it is generally terrible for anything but basic desktop use.


Also, when you enable DirectPath I/O for a particular PCIe device, the host no longer sees the device (except in the Advanced passthrough settings). It will be owned solely by the guest. The guest will also pre-allocate all RAM assigned to it, and won't share it with other guests, so you may need to add more RAM, depending on your host setup.

Once DirectPath I/O is enabled for a guest, you also lose any vMotion on that guest.

Also, when passing through, I am not sure if the GPU and audio hardware on the video card show up as the same device, or separate devices. You MAY have to pass through both.

I have full server grade hardware... I just didn't think it was supported at this time.
 
I have full server grade hardware... I just didn't think it was supported at this time.

Fair enough.

I have not yet tested with a GPU, but Direct I/O forwarding has worked flawlessly with everything I have tossed at it, including storage controllers, Ethernet adapters, etc. etc.
 
Wow, is this actually working reliably in ESXi 5.5 now? Last I heard it was hit and miss.
Is it now possible to pass through a mid-range AMD video card and get full sound/graphics via HDMI? I've currently got my HTPC sitting literally directly above my ESXi server....

I would say hit and miss, personally.
You need to be very careful about which model you actually buy.
I personally can pass through an XFX 7850 (both the GPU and the audio device) to a Windows x64 VM using BIOS firmware (it crashes with EFI).
There is some insight in this article on newer AMD graphics cards: http://www.pugetsystems.com/labs/articles/Multi-headed-VMWare-Gaming-Setup-564/
 
Zarathustra[H];1041294465 said:
I don't see why it wouldn't work, but keep in mind, for full reliable passthrough, you need a CPU, Motherboard and BIOS combination that supports VT-d (on Intel) or IOMMU (on AMD).

While I have heard that some hardware doesn't like VT-d/IOMMU, I have yet to come across anything that failed to work properly, but I haven't tried a GPU yet.

Depending on what hardware you use, this may or may not be easy to accomplish.

Any recent Xeon or Opteron system should do it. With consumer hardware it can be trickier.

All AMD FX chips support it, as do most 990FX motherboards and BIOSes, but they don't necessarily document this, so it can be a matter of buying a board and testing.

I don't know if this is enabled in any FM2 chips.

On the Intel side, any non-K CPU has VT-d support built in, but it can be VERY difficult to find motherboard BIOS support.

If you log in with the client to your ESXi server, look on the host Summary page under General at the last line, "DirectPath I/O:". If it says "supported", you should be good. Otherwise it will likely be a fail. (See image below)

[Screenshot: the host Summary page with the "DirectPath I/O" line under General]



There IS a software GPU passthrough that doesn't rely on DirectPath I/O, but it is generally terrible for anything but basic desktop use.


Also, when you enable DirectPath I/O for a particular PCIe device, the host no longer sees the device (except in the Advanced passthrough settings). It will be owned solely by the guest. The guest will also pre-allocate all RAM assigned to it, and won't share it with other guests, so you may need to add more RAM, depending on your host setup.

Once DirectPath I/O is enabled for a guest, you also lose any vMotion on that guest.

Also, when passing through, I am not sure if the GPU and audio hardware on the video card show up as the same device, or separate devices. You MAY have to pass through both.

Everything you said here is correct.
 
Zarathustra[H];1041296913 said:
Fair enough.

I have not yet tested with a GPU, but Direct I/O forwarding has worked flawlessly with everything I have tossed at it, including storage controllers, Ethernet adapters, etc. etc.
The VMware forums are full of examples of stuff that worked and didn't work. My advice: do your research, and an extensive one at that.
 
The VMware forums are full of examples of stuff that worked and didn't work. My advice: do your research, and an extensive one at that.

Fair enough. Maybe I have just been lucky.

In the case of Konowl though, with both the boxes right next to each other, it would seem free and easy to test with only enough downtime to power everything down, grab a screwdriver, etc.
 
Humble home lab

Intel Xeon E3-1230V2
Supermicro X9SCM-F-O
32GB (8GB x 4) KVR1333D3E9SK2
Kingston 16GB USB stick (ESXi boot)
OCZ Vertex 2 60GB SSD
150GB WD Raptor (2x)

Guests: Firewall/router, UTM, web server, mail server, sandboxes (of web & mail servers), Windows 95, 98, 2000 Pro, XP Pro, 7 (older Windows mostly for IE (5, 6) web dev testing).

 
ASRock EP2C602-4L/D16 Motherboard
2 x Intel Xeon E5-2620V2 Processors (12C/24T)
16 x 16GB DDR3 1600 MHz Memory (256GB)
Norco 4224 Rackmount Case
1 x 500GB Samsung 840 SSD
LSI Logic 9265-8i RAID Controller
2 x 500GB Samsung 840 EVO SSDs in RAID 0
2 x 500GB Samsung 850 EVO SSDs in RAID 0
4 x 256GB Samsung 840 Pro SSDs in RAID 0
Areca 1880i RAID Controller
8 x 2TB Hitachi 7200RPM HDDs in RAID 6

Guests are all about the Oracle Hyperion EPM stack and a few odds and ends here and there. Somewhere around 15 VMs running right now, with another 15 or so to be added soon.

 
Because the virtual disk that the Areca presents won't have the SSD flags set that vSphere picks up on.
 
Last thing on this: Does not having the flags affect performance?
 
The SSDs are actually all attached to the 9265. Is there a way to get them to show up as SSD in RAID? Does it actually matter?
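
Not an answer from the thread, but for reference: on ESXi 5.x there is a documented way to tag a device as SSD with a PSA claim rule when the controller hides the flag. Below is a rough sketch of scripting that from the ESXi shell; the device id is a placeholder, VMW_SATP_LOCAL is assumed to be the SATP claiming the device, and whether the tag actually changes anything for a RAID 0 virtual disk is a separate question.

```python
# Rough sketch (ESXi shell, which includes a Python interpreter) of the
# documented claim-rule trick for tagging a device as SSD when the RAID
# controller doesn't report the flag. The device id is a placeholder;
# adjust the SATP if VMW_SATP_LOCAL isn't the one claiming your device.
import subprocess

DEVICE = "naa.0123456789abcdef"  # placeholder: find yours with "esxcli storage core device list"

def run(args):
    print("+ " + " ".join(args))
    subprocess.check_call(args)

# Add a claim rule that marks the device as SSD.
run(["esxcli", "storage", "nmp", "satp", "rule", "add",
     "--satp=VMW_SATP_LOCAL", "--device=" + DEVICE, "--option=enable_ssd"])

# Reclaim the device so the rule takes effect without a reboot.
run(["esxcli", "storage", "core", "claiming", "reclaim", "-d", DEVICE])

# Verify: "Is SSD" should now report true.
run(["esxcli", "storage", "core", "device", "list", "-d", DEVICE])
```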
 
My Server Specs:

HP Proliant DL180 G6
40GB RAM, 2x Xeon E5520, HP Smart Array P410 RAID card w/512MB RAM, 2x HP NC362i NICs
2x 300GB SAS 15K RAID mirror for OS (Win Server 2012 R2), 2x 300GB SAS 15K RAID mirror for VMs (Hyper-V), 6x 2TB SAS in RAID 5 for fileserver

My only problem is I could never find a video driver for this thing. Other than that it runs great!

Any suggestions on upgrades or configurations would be grand.
 
Just picked up my new Host last night.

HP DL380P G8

2x Xeon E5-2640 v2 CPUs
128GB RAM (8x 16GB)
P420 RAID card
2x 136GB 15K drives
8GB SD card for ESXi


Going to be using it for a bit of everything. Looking to do my VCP and have a good play with Hyper-V and Citrix in nested environments.

I know some people have managed to get passthrough working with graphics cards, but I want to play with installing my R9 290 and see if I can virtualise my gaming machine. Then sell my main rig/old ESXi host, which was a white box 4770 with 32GB RAM.

I have been wondering how I power the card. Or would I need to bodge in a separate PSU to power the card?
 
My Server Specs:

HP Proliant DL180 G6
40GB RAM, 2x Xeon E5520, HP Smart Array P410 RAID card w/512MB RAM, 2x HP NC362i NICs
2x 300GB SAS 15K RAID mirror for OS (Win Server 2012 R2), 2x 300GB SAS 15K RAID mirror for VMs (Hyper-V), 6x 2TB SAS in RAID 5 for fileserver

My only problem is I could never find a video driver for this thing. Other than that it runs great!

Any suggestions on upgrades or configurations would be grand.

I have a DL180 G6 in my basement I'm not using.

These things are great until you decide you want to use an expansion card it didn't originally come with (for me it was the SAS controller, I wanted one with a JBOD/IT mode so I could use ZFS properly).

As soon as the server firmware doesn't recognize a PCIe card, the 8 fans shift up to full speed, and when they do, it is so loud you need hearing protection to work in the same room (there is no way in the BIOS/IPMI or any other setting to override this).

I could hear mine from my bedroom two floors up, with all doors in between closed.

If you are not noise conscious, however, I have one of the ultra-rare 3x PCIe risers for this server that I could be persuaded to part with :p
 
I have a riser already with it. Will get some pictures up soon!

I'm not overly fussed if this works or not, I'd just find it very cool if it did!

This is going into a full-sized rack (which I'm going to sound-deaden), then inside a cupboard, so noise should be controllable.
 
I have a riser already with it. Will get some pictures up soon!

Yep, you probably have the relatively common one with one 16x half height slot for the SAS controller on the right (if looking from the front of the server) and two 8x full height slots on the left.

The one I was talking about is very very similar, except it has one 16x on right, and three 8x on left.

It took me forever to find mine. Bought it because the setup I was building required a minimum of four slots.

I did not realize that as soon as I used any of them for a non-approved card (approved essentially being only HP's Smart Array controllers :p), it would get as loud as it did.

I spent about two months trying to mod it and make it quieter before giving up, pulling the CPUs, RAM and drives from it, and buying a Supermicro motherboard and starting over in a 4U Norco case.

I'm not overly fussed if this works or not, I'd just find it very cool if it did!

This is going into a full-sized rack (which I'm going to sound-deaden), then inside a cupboard, so noise should be controllable.

Well you are lucky then! :) The server is an absolutely fantastic deal as long as you either stick with only the expansion cards it shipped with, or have a very high noise tolerance.

I would still recommend sticking a non-approved PCIe card (any card really, could be a video card, or a low end USB controller) in one of the slots, and powering it up to see if you think you can deal with the noise.

Those 14krpm jet turbines are high-static-pressure 80mm fans that consume 18W each (!?) at full tilt and can wake the dead, especially with eight of them crammed into that 2U chassis, which acts like a resonator.

They make the old 80mm deltas everyone considered loud seem like toys :p

There is a reason they call it the HP DL180G6 Dreamliner :p
 
I made sure mine came with the riser card. Paid about $350 for it, minus the HDDs.
 
Yeah, so... it turns out the riser card is the one you were talking about! Also, how would I power the card?
 
Yeah, so... it turns out the riser card is the one you were talking about! Also, how would I power the card?

Well, I was talking about cards that draw all their power from the PCIe slot, but if you need extra power there is a way to do it.

In configurations that have the extra two drive bays in the rear instead of the riser, three cables run back to them from the backplane in the front: a single Molex connector for power and two SATA cables.

I forget if the SATA cable is hard-wired or if you plug something into the backplane, but I can look at mine when I get home and let you know.

Then from there you can use that single Molex (with adapters) to power things if you need to. Not sure how much current it can take, but...
 
Zarathustra[H];1041406375 said:
Well, I was talking about cards that draw all their power from the PCIe slot, but if you need extra power there is a way to do it.

In configurations that have the extra two drive bays in the rear instead of the riser, three cables run back to them from the backplane in the front: a single Molex connector for power and two SATA cables.

I forget if the SATA cable is hard-wired or if you plug something into the backplane, but I can look at mine when I get home and let you know.

Then from there you can use that single Molex (with adapters) to power things if you need to. Not sure how much current it can take, but...

So it looks like it is part of the harness. It is attached at the front right where the power cable plugs into the backplane, then wraps around the bottom in front of the fans all the way to the left, and then back to where the drive cage is installed in the back (if equipped). Not sure if the wire shipped in servers that didn't come with the drive cage, but as I understand it most of them did come with the drive cage, and the risers were typically installed later.

I can take pics if you'd like.

Look up front between the fans and the backplane, maybe it is tucked in there somewhere.
 
If only there were a way to make it quieter....

I tried modding it for about two months getting fan header connectors and wiring my own PWM system, and still couldn't find a good balance between noise and heat on these things.

It seems that because of the cramped 2U form factor, and the design with the backplane restricting airflow up front, they need those monster fans to go full bore, and when they do, the sheet metal acts as a resonator and amplifies everything.

Great solution for a datacenter where noise isn't an issue, due to the cost effectiveness. The second-hand home basement market is not where these things shine, though.

I picked up a Supermicro X8DTE new on eBay for $150, got a discounted Norco 4216 on Amazon, harvested the parts from the DL180 G6 (CPUs, RAM, drives), used a decent PSU I had lying around, and started over. It wasn't worth the aggravation anymore for me.
 
So now I know my 290 will 100% not work. Anyone got an ideal replacement?

I can then sell my old workstation/ESXi host, which is a 4770 with 32GB RAM. It would be awesome if I could use some of the power of this server, as I could create a fully virtualized domain test lab and still give myself 32GB of RAM.
 