How VMware And NVIDIA Are Bringing GPU Sharing To The Masses

HardOCP News

In the world of virtualization, enterprises have faced what I call a “trade-off of extremes” when it comes to graphics. They could choose either GPU sharing or dedicated GPUs, and each has advantages and disadvantages. With GPU sharing, you get the economic benefit of sharing hardware among many users, but limited API support takes a big toll on performance. With dedicated GPUs, you get great performance, but it doesn't scale, because each GPU is mapped to a single user. That trade-off is about to end with the arrival of NVIDIA GRID vGPU support in VMware Horizon 6 built on vSphere. This combination delivers rich, immersive, real-time virtualized graphics to workers of all types, no matter what device they choose.
 
Funny, because their drivers started blocking IOMMU use with the KVM hypervisor. It used to work, and then all of a sudden it didn't. Nvidia claims it's a coincidence, but in reality they did it on purpose to push people toward their professional products, which they did not gimp. It is for this reason that I am most likely going to AMD for my next card, unless Nvidia removes the KVM/IOMMU block from their drivers.

As for VMware... huh? They are still relevant and make a very good product. VMware Player beats the hell out of VirtualBox. It's not even close.
 
Seems that Citrix was leading this drive; this is the second VMware-focused GRID post.
 
Had this exact issue this weekend (the Nvidia driver disabling itself under KVM). Arch Linux with a 3.19-rc6 kernel (i915 and ACS override patches) and QEMU 2.2.0-1, passing two GTX 970s through to a Windows 7 Pro x64 VM with the Nvidia 347.25 drivers (latest). This fix worked for me, though:

"The Nvidia driver, starting with 337.88 identifies the hypervisor and disables the driver when KVM is found. Nvidia claims this is an unintentional bug, but has no plans to fix it. To work around the problem, we can hide the hypervisor by adding kvm=off to the list of cpu options provided (QEMU 2.1+ required). libvirt support for this option is currently upstream.

Note that -cpu kvm=off is not a valid incantation of the cpu parameter, a CPU model such as host, or SandyBridge must also be provided, ex: -cpu host,kvm=off."

Source: http://vfio.blogspot.com/2014/08/vfiovga-faq.html
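
For anyone who wants a concrete starting point, a minimal QEMU invocation using that workaround might look something like this (just a sketch; the memory size, PCI address 01:00.0, and disk path are placeholders you'd adjust for your own setup):

qemu-system-x86_64 -enable-kvm -m 8192 \
    -cpu host,kvm=off \
    -device vfio-pci,host=01:00.0,x-vga=on \
    -drive file=/path/to/win7.img,format=raw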

Changing the hypervisor signature is supposedly an alternative method: https://bbs.archlinux.org/viewtopic.php?pid=1421241#p1421241 Too bad I couldn't get SLI working at all, though; I would have loved to stick with Arch instead of going back to Windows...
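
On the libvirt side, the upstream support mentioned in the quote is exposed as a domain XML feature; if I understand it correctly, adding something like this to the VM definition (e.g. via virsh edit) hides KVM the same way:

<features>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>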

I 100% agree with you, though; Nvidia is such a money grabber. I was trying to pass a GTX 750 Ti through to a VM on my ESXi 5.5 server a couple of months ago and couldn't get past the Code 43 issue. I ended up buying an R7 250E, since there were tons of posts on the VMware forums about successfully passing AMD Radeon cards through to VMs. That card worked so easily, I didn't have to do shit past installing the drivers!
 
Nvidia is annoying the piss out of me with their virtualization support. If it doesn't make money, they aren't gonna do it.
 