AMD Ryzen with VMWare ESXi: A Pink Screen of Death

Zarathustra[H]
Extremely [H] · Joined Oct 29, 2000 · Messages: 38,743
Here's one for those of you who hang out in our Virtualized Computing subforum. With all of those cores for not very much money, many of us have been quietly wondering whether Ryzen would make for a good ESXi server. The answer right now is: probably not. At least not in 6.5, until VMware adds some compatibility updates to ESXi.

Personally, I switched my server over to Proxmox about a year ago and have been very happy with it, but it doesn't have a new enough kernel for Ryzen either, at least not in the stable repository. For now, a Linux system running kernel 4.10 or later with KVM might be the best way to use Ryzen as a virtualization server.

"The good news is that AMD-V extensions are present on the AMD Ryzen CPUs. Furthermore, virtualization does work using KVM using the newer Linux kernels. We even have AMD Ryzen systems in our Docker Swarm cluster. You can see our CentOS and Ubuntu guides for getting Kernel 4.10.1 installed with Ryzen.

VMware is not known for leading hardware support so this result should be little surprise to most in the industry. We suspect that as the AMD Naples platform is released we will see VMware support Zen virtualization."
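Before trying any of this, it's worth confirming that the host actually exposes the virtualization extensions. A quick generic Linux check (nothing here is Ryzen-specific) looks something like:

```shell
# Check whether the CPU advertises hardware virtualization.
# 'svm' = AMD-V (AMD chips), 'vmx' = VT-x (Intel chips).
if grep -Eqw 'svm|vmx' /proc/cpuinfo; then
    echo "CPU advertises hardware virtualization"
else
    echo "No svm/vmx flag - check that SVM is enabled in the BIOS"
fi

# See whether the KVM modules are loaded (kvm_amd on a Ryzen host):
grep -E '^kvm' /proc/modules 2>/dev/null || echo "KVM modules not loaded"
```

If the `svm` flag is present but KVM won't load, the usual culprit is SVM being disabled in the UEFI.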
 
Would KVM be a viable option for gaming on a Windows VM? Going 100% Linux on my gaming desktop would not work for me. I mean my Linux HTPC kind of requires a Windows instance for the heavy weight / not ported titles which I've been streaming with great success.

My goal is to still have a VM Windows and somehow use GPU pass through while running Linux as a host. Not interested in a VM server, meaning ESXi wouldn't do me any good.

Very much interested in an R7 1700 build.
 
Would KVM be a viable option for gaming on a Windows VM? Going 100% Linux on my gaming desktop would not work for me. I mean my Linux HTPC kind of requires a Windows instance for the heavy weight / not ported titles which I've been streaming with great success.

My goal is to still have a VM Windows and somehow use GPU pass through while running Linux as a host. Not interested in a VM server, meaning ESXi wouldn't do me any good.

Very much interested in an R7 1700 build.

Yes, it is very much possible.

I've been gaming under KVM (unRAID) for the last eight months. Subjectively it looks like about a 5% performance hit versus running Windows natively; basically, I can't tell ;-)

I've used both an AMD RX 480 and a GTX 960 with equal success, and I'm now running RX 480s in CrossFire. Seems to work OK, but needs more testing.
 
Would KVM be a viable option for gaming on a Windows VM? Going 100% Linux on my gaming desktop would not work for me. I mean my Linux HTPC kind of requires a Windows instance for the heavy weight / not ported titles which I've been streaming with great success.

My goal is to still have a VM Windows and somehow use GPU pass through while running Linux as a host. Not interested in a VM server, meaning ESXi wouldn't do me any good.

Very much interested in an R7 1700 build.


There are plenty of articles out there from people who have gone this route.

If the command line scares you (not saying it does, but it does for many people), KVM can be a bit intimidating, but there are plenty of guides on how to set it up. As always, passing through Nvidia GPUs is very difficult (though not impossible, as it is in ESXi), while AMD GPUs should work. This is all unsupported stuff though, so your mileage may vary.
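For anyone weighing that route, the broad strokes of a KVM GPU passthrough setup are fairly standard. The sketch below shows the usual shape on an AMD host; the device IDs are placeholders for illustration, not values from this thread:

```shell
# Sketch of the typical vfio-pci passthrough prep on an AMD host.
# The PCI IDs below are placeholders - substitute your own GPU's.

# 1. Turn the IOMMU on via the kernel command line (e.g. in
#    /etc/default/grub), then run update-grub and reboot:
#      GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

# 2. Find your GPU's vendor:device IDs:
#      lspci -nn | grep -Ei 'vga|3d|audio'

# 3. Tell vfio-pci to claim the GPU and its HDMI audio function at
#    boot, e.g. in /etc/modprobe.d/vfio.conf:
#      options vfio-pci ids=1002:67df,1002:aaf0
```

After a reboot, the GPU should show up bound to `vfio-pci` in `lspci -k` output, ready to hand to a VM.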
 
For now, IOMMU seems to be an issue though, which kind of negates the possibility of using it for a Windows gaming VM?
 
Would KVM be a viable option for gaming on a Windows VM? Going 100% Linux on my gaming desktop would not work for me. I mean my Linux HTPC kind of requires a Windows instance for the heavy weight / not ported titles which I've been streaming with great success.

My goal is to still have a VM Windows and somehow use GPU pass through while running Linux as a host. Not interested in a VM server, meaning ESXi wouldn't do me any good.

Very much interested in an R7 1700 build.

A quick guide here:

 
For now, IOMMU seems to be an issue though, which kind of negates the possibility of using it for a Windows gaming VM?

Hmm, I hadn't read that anywhere. Where did you see it?

I guess I just kind of assumed Ryzen would have IOMMU enabled, since all other recent AMD chips did.

The linked article says AMD-V is enabled, but that's only half the equation.
 
Have there been any Hyper-V VM tests?

Hyper-V compatibility is based on Microsoft's HAL and base drivers, so Ryzen should work just fine under Windows 10 and Server 2012/2016 Hyper-V. Linux-based OSes are the only ones that require a kernel update to work with Ryzen. Possibly some other tweaks will be needed for AMD-V under VMware as well.
 
Hopefully they get this patched quickly. VMs are one of the selling points of 8-core Ryzen chips.
 
Hopefully they get this patched quickly. VMs are one of the selling points of 8-core Ryzen chips.


I agree. I got tired of constantly fighting my ESXi box though: unpatched errors in the free version, obstructed passthrough, and so on drove me nuts. Much happier now with Proxmox.
 
I agree. I got tired of constantly fighting my ESXi box though: unpatched errors in the free version, obstructed passthrough, and so on drove me nuts. Much happier now with Proxmox.
Can you please tell me if Proxmox has a free version?

Their site is not too clear on that.
 
Can you please tell me if Proxmox has a free version?

Their site is not too clear on that.

Yep, there is a free version.

The only thing you have to pay for is the Enterprise version, which gets you access to the enterprise update repository. Without it you can only update from the less-tested branch, which is still fairly decent in the stability department.

Personally, I rely on my server for many things around the house, so I sucked it up and paid for the cheapest license. It is €5.83 per socket per month, and since I have a two-socket server that works out to €5.83 × 2 sockets × 12 months ≈ €140, or roughly $148 for the year. I thought it was kind of pricey for a community license, but it has turned out to be a good investment. I am pretty glad to no longer be using ESXi.

If the fee is too much, the free edition really isn't bad.
 
Yep, there is a free version.

The only thing you have to pay for is the Enterprise version, which gets you access to the enterprise update repository. Without it you can only update from the less-tested branch, which is still fairly decent in the stability department.

Personally, I rely on my server for many things around the house, so I sucked it up and paid for the cheapest license. It is €5.83 per socket per month, and since I have a two-socket server that works out to €5.83 × 2 sockets × 12 months ≈ €140, or roughly $148 for the year. I thought it was kind of pricey for a community license, but it has turned out to be a good investment. I am pretty glad to no longer be using ESXi.

If the fee is too much, the free edition really isn't bad.
I don't need much, and it will be on a simple PC.

How about Xen?

And thanks for the info!
 
I don't need much, and it will be on a simple PC.

How about Xen?

And thanks for the info!

That's one I haven't used yet. While it was a big player early on, I get the impression it has fallen behind of late.

ESXi was the biggest thing for years, but over the last few years people seem to be moving to KVM, whether on top of a traditional Linux box or via a dedicated virtualization distribution like Proxmox.

I'm sure someone else here has some experience they can share, though.
 
Thanks for the links and guides, people. I've been stuck with my 2500K for the last six years, and had a C2D for some time before that. AMD has been supporting IOMMU on most of their line for quite a while now, though their CPUs were kind of slow back then. Virtualizing everything has been a dream of mine for more than a decade. Maybe now's the time.
 
There are plenty of articles out there from people who have gone this route.

If the command line scares you (not saying it does, but it does for many people), KVM can be a bit intimidating, but there are plenty of guides on how to set it up. As always, passing through Nvidia GPUs is very difficult (though not impossible, as it is in ESXi), while AMD GPUs should work. This is all unsupported stuff though, so your mileage may vary.

Eh, I kind of like my 1070, and I wouldn't want to go AMD on the GPU side.
I'm not scared of the CLI; things can be automated.
 
Has anybody been able to test with VMware Workstation?

I was considering upgrading my current i5 setup, and a Ryzen 5 or 7 looks like a cost-effective solution.

I'm particularly interested in whether nested hypervisors would work (for VMware ESXi labs, etc.).
 
Has anybody been able to test with VMware Workstation?

I was considering upgrading my current i5 setup, and a Ryzen 5 or 7 looks like a cost-effective solution.

I'm particularly interested in whether nested hypervisors would work (for VMware ESXi labs, etc.).

I don't see why Workstation wouldn't work. As long as you have AMD-V (AMD's equivalent of Intel's VT-x) on a desktop platform, VMware Workstation just seems to work on everything. At least, that's been my experience.

I don't use it much anymore though. I had some guest OSes that just did not like VMware, so I tried Virtual PC instead; I was happy with it and have been using it ever since.
 
I'm posting not to report anything on Ryzen compatibility, but rather to counter the notion that it is impossible to get Nvidia GeForce cards working with ESXi. If you add this line to the VM's .vmx config file:

hypervisor.cpuid.v0 = "FALSE"

you can trick the Nvidia driver into loading normally by fooling it into thinking it's running on bare metal. I currently have a Dell Precision T5500 running ESXi with a GeForce 670 in a Windows 8.1 Pro VM.
 
Ubuntu 17.04 daily, ASUS PRIME-X370 PRO. No IOMMU whatsoever, /sys/kernel/iommu_groups is empty. SVM is enabled in BIOS.
 
Ubuntu 17.04 daily, ASUS PRIME-X370 PRO. No IOMMU whatsoever, /sys/kernel/iommu_groups is empty. SVM is enabled in BIOS.
On my ASUS B350 board I had to turn on IOMMU in another UEFI menu, separate from SVM.
 
On my ASUS B350 board I had to turn on IOMMU in another UEFI menu, separate from SVM.

Yeah, found it. ASUS is lying in the manual: it was in the Advanced > AMD CBS submenu, not in CPU Configuration. One has to wonder which side is to blame, the writers of the manual or the BIOS developers.

The grouping is still hard to work with though; it seems like the ACS patch helps with that:
Code:
IOMMU group 7
   00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Device [1022:1460]
   00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Device [1022:1461]
   00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Device [1022:1462]
   00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Device [1022:1463]
   00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Device [1022:1464]
   00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Device [1022:1465]
   00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Device [1022:1466]
   00:18.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Device [1022:1467]
IOMMU group 5
   00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Device [1022:1452]
   00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:1454]
   29:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device [1022:1455]
   29:00.2 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
   29:00.3 Audio device [0403]: Advanced Micro Devices, Inc. [AMD] Device [1022:1457]
IOMMU group 3
   00:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Device [1022:1452]
IOMMU group 1
   00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Device [1022:1452]
IOMMU group 6
   00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller [1022:790b] (rev 59)
   00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge [1022:790e] (rev 51)
IOMMU group 4
   00:07.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Device [1022:1452]
   00:07.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:1454]
   28:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device [1022:145a]
   28:00.2 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Device [1022:1456]
   28:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Device [1022:145c]
IOMMU group 2
   00:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Device [1022:1452]
IOMMU group 0
   00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Device [1022:1452]
   00:01.3 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:1453]
   03:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b9] (rev 02)
   03:00.1 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b5] (rev 02)
   03:00.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b0] (rev 02)
   1d:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b4] (rev 02)
   1d:02.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b4] (rev 02)
   1d:03.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b4] (rev 02)
   1d:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b4] (rev 02)
   1d:06.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b4] (rev 02)
   1d:07.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b4] (rev 02)
   23:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK208 [GeForce GT 710B] [10de:128b] (rev a1)
   23:00.1 Audio device [0403]: NVIDIA Corporation GK208 HDMI/DP Audio Controller [10de:0e0f] (rev a1)
   25:00.0 USB controller [0c03]: ASMedia Technology Inc. Device [1b21:1343]
   26:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)

 
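Listings like the one above are typically generated with the well-known sysfs loop below (this relies only on the standard /sys/kernel/iommu_groups layout, nothing board-specific):

```shell
# Print every IOMMU group and the PCI devices it contains.
for group in /sys/kernel/iommu_groups/*; do
    [ -d "$group" ] || continue           # skip if no groups exist
    echo "IOMMU group ${group##*/}"
    for dev in "$group"/devices/*; do
        addr="${dev##*/}"
        if command -v lspci >/dev/null 2>&1; then
            echo "   $(lspci -nns "$addr")"   # human-readable device name
        else
            echo "   $addr"                   # fall back to the PCI address
        fi
    done
done
```

Devices in the same group must be passed through together, which is why the large group 0 above is a problem for GPU passthrough without the ACS override patch.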
I'm posting not to report anything on Ryzen compatibility, but rather to counter the notion that it is impossible to get Nvidia GeForce cards working with ESXi. If you add this line to the VM's .vmx config file:

hypervisor.cpuid.v0 = "FALSE"

you can trick the Nvidia driver into loading normally by fooling it into thinking it's running on bare metal. I currently have a Dell Precision T5500 running ESXi with a GeForce 670 in a Windows 8.1 Pro VM.

I guess you mean possible, not impossible? I wish I had seen this back when I was trying it.
 