Passthrough Gaming Rig Upgrade

I've lurked hardforum for a decade or more and have always gotten solid advice here so I signed up to ask this. Hopefully VM is the right place for this and not GPU.

I have my ESXi setup on a box with a Xeon E3-1240 v3, 32GB RAM, some local SSDs and an NFS NAS. I had to inject NIC/HD drivers because the cheapo Z87 Extreme4 didn't play nice with ESXi 5.5U2. Nothing fancy.

To eliminate the computer sprawl and reduce wife-rage I got a refurb $90 GTX 480, softmodded it into a Quadro 6000 and set up PCI passthrough to a Win 8.1 x64 VM, with an SSD as a SATA datastore presented directly to the VM. This consolidated the *nix servers, the sandboxed web-browser boxes and, most importantly, my gaming box, and the setup has worked great for the last year or so. Windows is good. Steam is good. Most games run reasonably at 1080p with mid-high settings (I'm not a graphics junkie) and stream well to the other PCs in the house since I installed wireless N. However, after XCOM 2's framerate stuttering at times, I suspect the GTX 480 is getting a little old.

I'm not one for overclocking or hardware mods in general. I got the GTX 480 because it was (from what I read) the last softmoddable GeForce before the device ID was set with resistors, and buying $200+ cards just to solder them worries me.

I don't really care about nVidia vs. AMD, or even about ESXi itself (I was using KVM before), but I don't want to spend $1k+ on a genuinely modern Quadro just for ESXi passthrough (vDGA) support when a far more modest GPU would more than meet my needs. I haven't considered vSGA due to cost.

My question: Does anyone have any recommendations for stock / non-hardware mod upgrade paths for a GPU in this type of VM setup? Any experiences?

Thanks.
 
I've got a similar setup using an R9 270X. I went AMD just to avoid the whole modding thing, and I like ESXi because everything can be done with a GUI (short of a few custom lines I had to add to my .vmx, e.g. pciHole). I'm looking at upgrading too and am keeping an eye on the 390/390X for cheap, or the Fury Nano now that the price has dropped. Just for reference, I'm running a VM with 8 cores of an X5687, 12GB RAM and that 270X, playing at 1080p with most settings on high or ultra except the things that aren't great on AMD cards.
 
I found 12GB to be the sweet spot for my gaming VM as well. :)

How is ESXi compatibility with the R9 2xx / R9 3xx cards? Is it like nVidia where only a select few device IDs seem to work with passthrough?

I haven't seriously considered AMD for a number of years, but playing with GPUBoss / Newegg pricing it seems OK. The Fury Nano is still ~$800 (CAD), so out of my league. 390s are still ~$400 new, but the 380s would be a substantial upgrade and could probably be done for $200-300. Maybe look for one on sale / refurb.

XFX Radeon R9 380 R9-380P-4255 4GB 256-Bit GDDR5 PCI Express 3.0 CrossFireX Support Double Dissipation XXX OC Video Card
HardOCP Editor's Choice Gold Award winner

On Newegg the above looks promising @ $300 (I'm not posting links as I'm too new).
 
As far as I know, AMD cards just work. I know my 270X did; the only thing I had to do was edit the .vmx file and add the line "pciHole.start = xxxx", where xxxx is 2048, I think. You might have to play around with that number depending on how much RAM the card has, but that was it.
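For anyone searching later, the edit looks roughly like this in the .vmx (just a sketch based on the value above; the number is in MB, pciHole.end is optional, and cards with more VRAM may need a bigger hole):

pciHole.start = "2048"
pciHole.end = "4096"

You can add the lines by editing the .vmx while the VM is powered off, or through the VM's Configuration Parameters dialog in the vSphere Client.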
 
Yeah, if Nvidia has locked out the feature you need, then go AMD. The 380/380X is currently priced very aggressively against the GTX 960.

It's also the bare minimum I'd buy to upgrade from a GTX 480.
 
Has anyone tried the newest Nvidia drivers on a newer GTX card? I saw that "Beta support on GeForce GTX GPUs for external graphics over Thunderbolt 3" was added. Would this hint at better GPU passthrough for Nvidia?
 
External graphics is very different from passthrough. External graphics via Thunderbolt is effectively just shuffling some PCIe lanes around, so the short answer would be "highly unlikely".
 
I still don't understand the mechanism that makes GRID, Quadro and Tesla cards (and a rather restricted subset of those) work while every other nVidia card fails, when AMD cards are unsupported for vDGA yet seem to have far better general compatibility.

Is this just blatant device-ID sniping by the nVidia driver when it detects VMware/ESXi on an unsupported card, or a side effect of something else?
 
As far as I understand, yes, that's exactly what it is. The driver shuts down the card if it detects that you're running in a VM. Apparently on KVM you can use Nvidia cards, but you need to add a line to hide the hypervisor from the VM (see the sketch below).
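For reference, on a libvirt-based KVM host that "line" usually means adding these elements to the guest's domain XML (a sketch, not from this thread; the <hyperv> vendor_id spoof is only needed if Hyper-V enlightenments are enabled):

<features>
  <kvm>
    <!-- hide the KVM signature so the GeForce driver doesn't refuse to start -->
    <hidden state='on'/>
  </kvm>
  <hyperv>
    <!-- optional: spoof the Hyper-V vendor string; the value is arbitrary -->
    <vendor_id state='on' value='whatever'/>
  </hyperv>
</features>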
 
Would be nice if you could do the same with ESXi or something. Unraid also seems to get Nvidia cards working in a VM.
 
KVM works great with Nvidia stuff. I also just set up a passthrough rig with Arch as the host.

[photos of the rig]

I've since retired the 680 and picked up a 980 Ti.
 
Neat, nice setup you've got there. Is that a PlayStation monitor on the left?
 
Long time reader, first time poster.

Just wanted to contribute after many years of reading, and more recently a lot of searching the 'net for experiences with GPU passthrough on ESXi 6.0 and an ATI R9 Fury. I couldn't find much out there; there's information on the 280/290/380/390s, but nothing on the R9 Fury series that I could find.

It was a $700 CAD video-card leap of faith, but I'm happy to report that I got GPU passthrough working on ESXi 6.0, passing through to a Win10 VM. The card I used was the Sapphire R9 Fury (not the OC'd version). No .vmx edits were required or anything, but I will admit it took me three days to figure out that things were working. In the end I believe I was fighting driver versions unnecessarily, as I've since gotten things working with both the Crimson 15.12 and 16.3.1 drivers.

I did endure a lot of VM crashing during driver installs etc. Also, my Sapphire R9 Fury would, from time to time during troubleshooting, go to ludicrous speed with its fans, requiring a hard reboot of the physical machine. But at the end of the day I got it working, and now everything behaves just fine.

Funny enough, I had given up after troubleshooting until 4am one day and had actually decided I might have to return the R9 Fury. I rebooted my physical host and just went to bed. Next morning, I wake up, turn on the monitors, and notice the monitor connected via DisplayPort to the physical host is... sitting at the Win10 login screen!

I log in, run 3DMark, and sure enough I'm getting ~12200 points in Fire Strike with a Xeon E5-1650 v3 on a Supermicro X10SRH. Success! It was the hard reboot that fixed things. And yes, I was fully aware you have to reboot the physical host after setting up GPU passthrough in ESXi; in fact I probably rebooted the physical host about 100 times during my troubleshooting.

One peculiarity: I manage the VMs with the vSphere Client from a separate physical computer. When I power on the Win10 VM that I've passed the GPU through to, the VM powers up and I get a black screen in the vSphere Client. That was really throwing me off during my troubleshooting, as I thought things were just getting hung up, but it turns out the black screen is just a "secondary screen", with the "primary screen" being the DisplayPort-connected monitor. Even stranger, I have to click on the vSphere Client black screen to give the VM focus, then move my cursor from the right half of the black screen over to about the middle of it, and that makes the mouse pointer suddenly jump over to the "primary screen" on the DisplayPort-connected monitor. From there I can mouse around on the primary screen and do whatever I please.

Very odd, but it works, and I'm happy.

I may eventually record my successes on video and chuck it onto YouTube, if I'm brave enough to start from scratch and document how I got things working. For now I'm just enjoying the fact that it works and thought I'd let others know.

Lastly, today I also installed an Inateck USB 3.0 PCIe card and passed that through to Win10 as well. Seamless. Now for some attempts at gaming.

--BroccLee
 
Do you know how you got it working or did it really just magically start working that one night after you went to bed?
 
Haha. Very good, and valid, question. I have asked myself that a few times, which is part of why I'm not quite prepared to rebuild from scratch yet, without enjoying things first.

Part of the issue in my troubleshooting is that the DisplayPort-connected monitor is a little hard to see (due to its location) from my vSphere Client workstation, so out of convenience I was paying more attention to my vSphere Client's darn black screen. Troubleshoot, reboot VM, rebuild VM with BIOS vs. UEFI, reboot, try, crash, safe mode, uninstall drivers or disable the R9 Fury, reboot, reinstall different drivers, crash, tweak the .vmx file, try, crash, restore snapshot, etc. The list goes on and on (like I said, three days of troubleshooting).

At the end of the day I don't have a perfect step-by-step process, but I have a feeling the following will work:

1) Install ESXi 6.0.
2) Enable GPU passthrough in ESXi. ESXi will force you to pass through BOTH the R9 Fury and the audio device that is built into the GPU.
3) Reboot the physical host to complete enabling GPU passthrough.
4) Install Windows (10, in my case).
5) Install VMware Tools (not sure this is required, but the stuttery mouse in the vSphere Client drives me insane so I always do this), reboot the VM, etc.
6) Add the GPU as a PCI device in the Win10 VM settings. Boot up; Win10 will show an exclamation mark next to the newly detected GPU. (The resulting .vmx entries look roughly like the sketch after this list.)
7) Somehow get the ATI Crimson drivers installed (as I said, I tried Crimson 15.12 and Crimson 16.3.1). Some tips:
a) Installing the full Crimson driver package (complete with all of ATI's software) when not in safe mode will invariably result in your VM crashing.
b) Try installing the driver package in safe mode. Try both the package with all of ATI's software and the trimmed-down driver-only package that you can download from ATI's website.
c) One way or another, get those drivers installed without crashing.
d) Try booting up your machine. If you get a black screen, don't panic like I did; wait a bit, and once you're confident the VM has fully started, try clicking the black screen a few times and keep an eye on your DisplayPort-connected monitor. It might just turn on. Move your mouse around to see if you can get the cursor over to your primary screen.
e) If d) above doesn't seem to be working, REBOOT YOUR PHYSICAL MACHINE. Then try d) again.
7.9) Try going to bed. You had a hard day, you've earned it.
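(For reference, after step 6 the .vmx ends up with pciPassthru entries roughly like the following sketch. The PCI addresses and IDs here are made up for illustration; the real values are filled in by the vSphere Client when you add the devices, one block for the GPU and one for its HDMI audio function.)

pciPassthru0.present = "TRUE"
pciPassthru0.id = "0000:03:00.0"
pciPassthru0.deviceId = "0x7300"
pciPassthru0.vendorId = "0x1002"
pciPassthru1.present = "TRUE"
pciPassthru1.id = "0000:03:00.1"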

Misc. Notes:
-----------------
8) One nifty feature of the Sapphire R9 Fury is that it has a series of blue LEDs. One LED turns on when the GPU is "activated" (i.e. when my VM boots up, but NOT when my ESXi physical host boots up). It would have been nice to know that's what it meant when I started all this troubleshooting; at first I thought only having one light on might mean there was a problem, but it turns out that getting one blue LED to light up is a good sign the GPU is "enabled" in the VM.
9) The rest of the blue LEDs turn on and off as GPU load goes up and down, sort of like a utilization meter.
10) Sometimes during my troubleshooting, ALL of the blue lights on the Sapphire R9 Fury would turn on and stay on. A few seconds later the three GPU fans would spin up to full speed and I had to hold my tower down for fear it might fly away (not to mention I couldn't hear myself think when those fans were going). The only way to get the LEDs to turn off is to kill power to the physical machine. I consider this a bug/flaky issue with GPU passthrough on ESXi.
11) I do not believe UEFI vs. BIOS for the VM itself matters, but for what it's worth the VM I'm using successfully at the moment is in UEFI mode.
12) I did not have to disable any onboard video in the physical machine's BIOS (the Supermicro X10SRH has built-in video for its integrated IPMI/lights-out management), but you do of course have to enable VT-d etc.
13) I did dedicate 32GB of RAM to my Win10 VM. I do not know if this is required, but it sure was fun since this is the first server I've built with 128GB of RAM.
14) I did NOT have to disable the VMware SVGA 3D adapter. In other words, I have things working with both display adapters enabled in Device Manager.
15) Once your VM is booted up and the "blue light" is on, you can NOT reboot the Windows 10 VM. It'll lock up and your three GPU fans will go berserk. The only way to reboot your VM is to reboot the physical machine. This may be a problem for some.

Hope this helps. If I think of any other tips I'll post again.

I also hope someone else can get this going as well so that I don't have to try again from scratch :)

--BroccLee
 
Haha thanks, sorry if I came off as sarcastic, that was not my intention :)

I'm trying passthrough with several Radeon GPUs right now, but I keep blue-screening whenever the monitor is plugged in. I haven't tried installing the drivers in safe mode though, so I'll try that next.
 
No problem at all. Here's another thing you can try:

Boot into safe mode and, in Windows Device Manager, DISABLE your Radeon GPU (you know, right-click, Disable).
Reboot out of safe mode.
Get back into Windows. Your Radeon GPU will still be disabled here, of course, so your VM should boot up using the VMware SVGA 3D driver.
With your GPU disabled, install the Radeon drivers (again, perhaps try installing JUST the drivers instead of all the extra software that's packaged into the Crimson bundle).
Enable your GPU again.
Reboot your physical machine, fire up the VM, be patient, and keep an eye on the monitor that's plugged into the GPU.

Give that a shot and see if it helps... I may try to rebuild a new Win10 VM this weekend to see if I can figure out a step-by-step procedure.

--BroccLee
 
I got my new hardware in, and it simply worked by installing the Crimson driver (not even in safe mode). R7 240 and 7870 working fine in Windows 10. However, I also ran into the issue where the host would reboot (not even a PSOD) when the VM was shut down. It *seems* to be resolved for me by removing the AMD Audio device passthrough to the VM (not passing it through in the first place). I don't use GPU audio anyway so it's not an issue for me.
 
Glad to hear you got it working. It sounds like it was much simpler for you, and in general other reports I've seen online seem to indicate that this is a little easier to get going with AMD's other GPUs.

I decided to spend some time writing a step-by-step procedure. I put it in another thread to make it easier to find, in case anyone is looking for it.

ESXi 6.0 and AMD R9 Fury Working Procedure

--BroccLee
 