ESXi 6.0 and AMD R9 Fury Working Procedure

brocclee
A few days ago I posted that I had gotten the AMD R9 Fury working with GPU passthrough in ESXi in this thread:

Passthrough Gaming Rig Upgrade

At the time I didn't quite have a step-by-step procedure for how I got things working (other than "I went to bed"); I just wanted people to know that it was possible, since I hadn't seen any literature online saying that the R9 Fury would work, though there is lots of similar literature about the AMD Radeon 280/290/380/390 series etc.


I'd also like to note that this "procedure" seems to be very dependent on which GPU you're using. I see lots of other procedures out there, some much simpler than this. The process seems to be generally easier with older AMD GPUs.

With a more structured troubleshooting approach, I spent the last few days documenting a step-by-step procedure for how I got this working. It turns out it was still difficult to recreate, even with a VM that I knew worked, and it took a long time to determine which steps were really required and which were optional, but here it is:


Motherboard: Supermicro X10SRH-CLN4F
CPU: Xeon E5-1650v3
RAM: 128GB DDR4 PC4-17000 ECC Registered DIMM MEM-DR432L-SL01-ER21
Video Card: Sapphire R9 Fury Tri-X 4GB HBM PCI-E (not the OC’ed version)
OS: ESXi 6.0.0


  1. Install ESXi 6.0

  2. Directly attach monitor to GPU (using DisplayPort in my case)

  3. Boot up ESXi

  4. Add the R9 Fury GPU adapter on the DirectPath I/O Configuration screen in the vSphere Client. Note that you will be forced to add both the video and audio adapters. (See the first pyVmomi sketch after this list for a scripted version of this step.)

  5. Reboot ESXi

  6. Add new Win10 64-bit VM.
    1. (Optional) Dedicate 32GB RAM to the VM (doesn’t seem to matter whether the RAM is dedicated or not)

    2. (Optional) Use EFI boot mode (can be BIOS as well; doesn’t seem to matter)
  7. Install Windows 10

  8. Install VMware Tools

  9. (Optional) Do all Windows critical updates (you can also do this later, but rebooting the VM is a pain, since the VM doesn’t seem to release the GPU on reboot, causing it to freeze before the login screen)

  10. Shut down VM

  11. (Optional) Take VM snapshot (to make troubleshooting easier if required)

  12. Boot Win10 VM up

  13. Either
    1. Disable Windows driver search (right-click This PC, Properties, Advanced System Settings, Hardware tab, Device Installation Settings, select “No, never install”, and uncheck “Automatically get device app info…”)

      OR


    2. Install Display Driver Uninstaller v15.3.1, run it, and allow it to reboot into safe mode. Once in safe mode, do an AMD driver uninstall by clicking “Clean and Restart”. DDU disables automatic updates of Windows device drivers, effectively doing the same thing as option 1 above.
  14. Shut down Win10 VM

  15. Add the R9 Fury video adapter as a PCI device to the VM in VM Settings

  16. Add the R9 Fury audio adapter as a PCI device to the VM in VM Settings (see the second pyVmomi sketch after this list for a scripted version of these two steps)

  17. Start up Win10 VM

  18. Once Win10 boots, Device Manager should still show a yellow exclamation mark on a generic Microsoft Basic Display Adapter (in other words, it should not say AMD Radeon quite yet)

  19. Wait ~2 minutes or so to make sure Windows isn’t attempting to update drivers in the background

  20. Install the AMD drivers (you can install all of AMD’s bloatware as well; no need to exclude anything). My Sapphire GPU has a blue LED that turns on when it is active; it turns on here, and the DP-connected monitor turns on.
    Driver versions Crimson 15.12, 16.3.1, and 16.3.2 were all tested with this procedure, and all seem to work.

  21. Once driver installation completes, shut down Win10 VM

  22. Reboot the ESXi physical host (this frees up the GPU, since the blue light turns on during driver installation, effectively consuming the GPU resources)

  23. Start up Win10 VM using vSphere
    1. Open the VM console window in vSphere and click to give it focus.

    2. This vSphere console window is the “secondary” monitor, and your directly-connected monitor is set up as the “primary” monitor. Move the mouse around in the vSphere console and see if you can get it over to the primary monitor. The cursor should weirdly disappear and then reappear on the primary monitor when crossing from the right half of the console window to the left half.

    3. Click to enter login/password

    4. Log in and have fun!
  24. Make a mental note that you can’t reboot just the VM. If you have to reboot the VM, you have to reboot the entire ESXi physical machine.

  25. If you want, re-enable automatic updates of Windows device drivers (disabled in step 13 above)
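
For anyone who'd rather script the host-side setup, below is a minimal pyVmomi sketch of step 4 (enabling DirectPath I/O on the GPU). This is my own illustration rather than part of the procedure: the hostname, credentials, and PCI addresses are placeholders you'd swap for your own.

Code:
# Sketch only: enable passthrough on the Fury's video + audio functions.
# Assumes pyVmomi is installed; host, credentials, and PCI addresses below
# are placeholders for whatever your environment actually uses.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab host with a self-signed cert
si = SmartConnect(host="esxi.local", user="root", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]
    pci_sys = host.configManager.pciPassthruSystem

    # List every passthrough-capable device so you can spot the Fury's two
    # functions (video is usually xx:00.0 and its HDMI audio xx:00.1).
    for info in pci_sys.pciPassthruInfo:
        if info.passthruCapable:
            print(info.id, "enabled:", info.passthruEnabled)

    # Example addresses only; substitute the IDs printed above.
    cfg = [vim.host.PciPassthruConfig(id=dev, passthruEnabled=True)
           for dev in ("0000:02:00.0", "0000:02:00.1")]
    pci_sys.UpdatePassthruConfig(cfg)
    # As in step 5, the host still needs a reboot before this takes effect.
finally:
    Disconnect(si)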

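Steps 15 and 16 can be scripted the same way. Again just a sketch under assumptions: “Win10-Fury” is a hypothetical VM name, the PCI addresses are the same placeholders as above, and the VM must be powered off. Note the memory-reservation flag at the end; ESXi requires a passthrough VM's memory to be fully reserved, which I believe is what the vSphere client enforces when you add the PCI device in the GUI.

Code:
# Sketch only: attach both passthrough functions to the powered-off VM.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.local", user="root", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vms = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True).view
    vm = next(v for v in vms if v.name == "Win10-Fury")  # hypothetical name

    # QueryConfigTarget lists the passthrough devices this VM could use,
    # including the systemId that the device backing needs.
    env = vm.environmentBrowser.QueryConfigTarget(host=vm.runtime.host)
    by_id = {p.pciDevice.id: p for p in env.pciPassthrough}

    changes = []
    for dev_id in ("0000:02:00.0", "0000:02:00.1"):  # video + audio functions
        p = by_id[dev_id]
        backing = vim.vm.device.VirtualPCIPassthrough.DeviceBackingInfo(
            id=p.pciDevice.id,
            deviceId=format(p.pciDevice.deviceId & 0xFFFF, "04x"),
            systemId=p.systemId,
            vendorId=p.pciDevice.vendorId)
        changes.append(vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
            device=vim.vm.device.VirtualPCIPassthrough(backing=backing)))

    # Passthrough VMs need their full memory reserved; this flag handles it.
    spec = vim.vm.ConfigSpec(deviceChange=changes,
                             memoryReservationLockedToMax=True)
    WaitForTask(vm.ReconfigVM_Task(spec))
finally:
    Disconnect(si)
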
Hopefully this helps someone out there. If you have any questions let me know. Thanks.

--BroccLee
 
Regarding steps 13 and 19, wouldn't it be easier to disconnect the NIC in the VM's settings?
To be honest, I'm not sure whether that would work, but it could be worth a shot. I simply discovered through many iterations (thank goodness for Restore Snapshot!) that running DDU before installing the AMD drivers was an important and necessary step to prevent the VM from locking up during the AMD driver install. After pondering why that might be, I narrowed it down to the fact that DDU disables automatic driver updates, so I figured why not just try disabling automatic driver updates myself instead of relying on DDU to do it. And sure enough, it worked. Pulling the network cable could also work, or it might not; I didn't try that specifically, as I've already sunk about 5 days of testing into this :)
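
In case it helps anyone script this: the setting that DDU (and the manual toggle in step 13) flips is, as far as I can tell, a single registry value. Here's a minimal sketch, run inside the Win10 guest as Administrator; SearchOrderConfig is the documented switch for whether Windows pulls drivers from Windows Update:

Code:
# Sketch: toggle Windows' automatic driver search (what step 13 / DDU does).
# Run inside the guest as Administrator.
import winreg

KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\DriverSearching"

def set_driver_search(enabled):
    # 0 = never search Windows Update for drivers, 1 = default behaviour
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "SearchOrderConfig", 0,
                          winreg.REG_DWORD, 1 if enabled else 0)

set_driver_search(False)  # before the AMD driver install (step 13)
# set_driver_search(True)  # later, if you do step 25's re-enable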

--BroccLee
 
Question: have you been able to try one of AMD's new 500 series cards with ESXi passthrough, or do you know of anyone that has it working?

I sold my main gaming rig and was thinking of grabbing a 570, as I believe it was you who noted that AMD fixed the issues that caused ESXi to hang completely when the AMD R9 300 series cards were passed through...
 
Okay, so I went and bought a Radeon RX 550 for now, as I don't want to wait weeks to use my rig while RX 570s trickle into retail, and heck, maybe this card will suffice for my gaming needs.

I got passthrough working, but had the issue of the guest VM crashing during the driver installs, as noted above. Then this morning I came to work on it, powered it up (the entire server), and passthrough appeared to be working (now to get my USB passthrough to work). The thing was, I had rebooted the host several times yesterday with no luck.

However, upon rebooting the Win10 VM, when the system finally rebooted - BAM! ESXi crashed and hard rebooted :(. I hope a fix for this comes out one day!

So now I am:
- creating a fresh new VM to pass through the video card and USB, using your specific guide above
- already updated ESXi to the April 18th, 2017 release, 6.5.0d (https://kb.vmware.com/selfservice/s...ype=kc&docTypeID=DT_KB_1_1&externalId=2143832)

And cross my fingers!
 
Quoting step 21 above ("Once driver installation completes, shut down Win10 VM"):

Shutting down via Windows = hard crash and reboot of the ESXi server for me, FYI.

Then, after ESXi reboots, when I try to power on the VM, ESXi hard crashes and reboots again.
 
MrGuvernment, did you ever get this working? I'm trying to do the same thing.
 
I gave up and ran Hyper-V on the rig via Windows 10. I know they did a big update recently on ESXi 6.5, but none of the notes seem to mention anything about GPU passthrough issues. I was thinking of trying again, though.
 
Darn. Thanks for the reply though, sir.
 