VT-d/VGA Passthrough?

I'll have to look closer at it. I think my main thing is figuring out the passthrough. I need to pass the OI VM the SATA card and my HTPC VM the video card.

It would be nice if I could have Xen on the OI (ZFS) install, but I don't know if that's possible.

Xen needs kernel-level support on the host OS. I don't think there's support for it in the Solaris/Illumos kernel.
 
I thought Xen passthrough required the enterprise version; at least, that's what I thought I had seen on its site.

Will have to go check again.
 
You might be looking at XenServer, a commercial product based on Xen. Xen itself (with all of its features) is completely free and open source. It can be installed on most distros with apt-get, yum, or another package manager of choice.
 
Ah cool, thanks.
Does XenCenter work with that too? Or is that only for the commercial stuff?
I want some sort of graphical interface, as once everything is set up it's much easier to monitor!

I'm tempted to give this all a go, though there seem to be mixed reviews of how successful people's setups are for 1080p playback via DXVA.

Just got to find a half decent motherboard for this! With a bunch of PCIe slots and maybe onboard dual LAN!
 
Hey all! First post here and I decided to join based on the quality of this thread (and all the others I have read while searching around :D)

Anyway, getting to the point: I would like to show you my VT-d setup.



This machine is an ITX form factor with an i7-2600 on an ASRock H67M-ITX/HT board. I have an HD 5850 passed through and working just fine in a Windows VM (which I'm using to type this now). I am running Xen 4.1.2 with the Fedora 16 LXDE spin as my dom0, and I have another F16 domU for Minecraft and all my other server needs.

Something interesting I found was that the devices do not need to be hidden from the dom0 with pciback (which didn't work for me anyway). I found a nifty little work-around using libvirt. I was using virt-manager to monitor my VMs, and I found that even when the device could not be passed through with virt-manager, it was still being bound to pciback and would show up in # xm pci-list-assignable-devices. After doing some research I was able to figure out that
Code:
# virsh nodedev-list
and
Code:
# virsh nodedev-dettach pci_xxxx_xx_xx_x
would allow
Code:
# xm pci-attach domU xx:xx.x
to work.
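Put together, the whole dance looks something like this (a sketch only: the BDF 0000:01:00.0 and the domU name "windows" are placeholders for whatever `virsh nodedev-list` and `xm list` show on your box, and the run() helper just prints the privileged commands instead of executing them):

```shell
#!/bin/sh
# Dry-run sketch of the libvirt detach + xm attach workaround described above.
# run() prints each command rather than executing it; drop it for real use.
run() { echo "+ $*"; }

BDF="0000:01:00.0"     # placeholder: full domain:bus:device.function of the card
DOMU="windows"         # placeholder: name of the guest

# libvirt names PCI node devices pci_DDDD_BB_SS_F; derive that from the BDF
NODEDEV="pci_$(echo "$BDF" | tr ':.' '__')"

run virsh nodedev-dettach "$NODEDEV"      # unbind the dom0 driver via libvirt
run xm pci-attach "$DOMU" "${BDF#0000:}"  # hand the device to the running domU
```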

I am still sorting out some other issues, and I had to disable NetworkManager and create a new bridge to get networking working properly.
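For anyone hitting the same wall: with NetworkManager disabled, the bridge can be defined with the classic Fedora network scripts, along these lines (the names br0 and eth0 are assumptions; adjust to your NIC):

```
# /etc/sysconfig/network-scripts/ifcfg-br0   (the bridge itself)
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-eth0  (physical NIC enslaved to the bridge)
DEVICE=eth0
BRIDGE=br0
ONBOOT=yes
NM_CONTROLLED=no
```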

I am a long time member of overclock.net and I have a good little guide on there, because I think this technology is just too cool to dismiss. I have had a few people try it with varying degrees of success and am always trying to improve my methods. What I really want to do is create a targeted F16 live-USB that will allow users to try it without having to format drives. I have been able to make the live-USB without much of a problem, but I can't seem to figure out how to change the standard bootloader to GRUB 2 for multiboot and Xen. Any help there is appreciated.

Anyway, thanks for reading and I am glad to be here (finally)
 
I had to blacklist the original driver to get pciback to claim the device. libvirt works because it actually unbinds the original driver before you start the machine; it's not so different from what the guy did earlier in the thread - it's in his script in one of the pictures.
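Concretely, that blacklisting is just a modprobe rule plus a boot parameter, something like this (the radeon driver and the 01:00.0 BDF are placeholders for whatever `lspci -k` reports for your card):

```
# /etc/modprobe.d/blacklist-passthrough.conf -- keep dom0's driver off the card
blacklist radeon

# and on the dom0 kernel line in GRUB, let pciback claim it at boot:
#   xen-pciback.hide=(01:00.0)
```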

Video was actually never a problem for me with hardware passthrough; it was the other stuff that tended not to work - WiFi cards in my case, and tuners also notoriously don't work. PV passthrough works great for me, but some people may not be content with its limitations.
 
I'm thinking of building a setup like this. I would love to have my workstation running Linux, but at the same time I would like to be able to easily switch to my Windows VM for the occasional gaming session.

How easy is it to switch between the two VMs?

The GPU would be one 5870 with 3 screens. So I guess I would need to switch the GPU between the two systems; would that require a reboot? In that case I might as well just dual-boot...
 
What do you mean by "switching" the GPU? You have to treat the host and guests as completely different machines; they really don't know about each other's existence. The tools available for communication between one another are the same as those available between two discrete hardware machines.

These guys installed Linux, which uses its own driver for the video card. They then unbind the driver, which turns the card off for the host; then bind the passthrough driver, which doesn't function as a video driver on the host; and then assign it to the guest, at which point you can bring up the guest and it will have the GPU. To switch back you have to do the reverse, starting with bringing down the guest, while the host does not have a functioning video driver.
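In sysfs terms that unbind/rebind cycle looks roughly like this (a sketch, assuming pci-stub is the passthrough driver; the BDF and the vendor/device pair from `lspci -n` are placeholders, and run() prints the writes instead of performing them):

```shell
#!/bin/sh
# Dry-run sketch of detaching a card from its host driver and giving it
# to pci-stub. run() prints each step; drop it to do this for real (as root).
run() { echo "+ $*"; }

BDF="0000:01:00.0"   # placeholder: the card's PCI address
ID="1002 6899"       # placeholder: vendor and device id, space separated

run sh -c "echo $BDF > /sys/bus/pci/devices/$BDF/driver/unbind"  # turn it off for the host
run sh -c "echo $ID > /sys/bus/pci/drivers/pci-stub/new_id"      # let pci-stub claim it
# ...boot the guest; to switch back, reverse the steps and rebind the real driver
```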

You could do it with scripts, which is how they bring it up, but this is really way too complicated and fairly time consuming even once you do get it to work. And, most importantly, for what? What do you want to do on the host, while having the guest up with video through the guest, that you cannot do otherwise with ssh and X forwarding?
 
I meant switching as you describe in your second paragraph.

If you read my post again you can see that I talk about two VMs. One would run Linux and the other Windows. I wouldn't want to use the host as my workstation though.

What I would like to achieve is a scenario where I can run my Linux VM on all three screens and, by issuing a command to the host, have it switch the GPU over to my Windows VM. But from what you describe I need to shut down the guests while switching over the GPU, and if that's the case I might as well dual boot.

EDIT: One option that would do exactly what I want is to have a separate card for each VM. But then I would need to switch my monitors between the two cards. It could work with a monitor switch, but I would prefer to do it on one card...
 
Whether it's the host or another guest doesn't matter much.

The most seamless integration at this point is to use Windows and X forward any application you want to run in Linux. Your Linux machine will be running off the emulated video. And then you arrive at the inevitable conclusion that there's little point in not having Windows be the host, with something like VirtualBox running the guests.

Two cards will work and will drive separate monitors if you want them to, in which case I'd suggest either dividing up the monitors or getting some more. Switching inputs, even with the buttons on the monitor, takes a while, and then there's also the problem of what you want to do with controls - the first guy passed through a USB controller, and the second has probably done the same. If you want two guests driving two sets of monitors directly, you'll need either a KVM switch, software like Synergy, or straight up two sets of kb+m. Of these, Synergy is preferred from a usability standpoint, but there will be things you'll miss - like wanting to copy-paste something from one desktop to another, which obviously won't work. You won't have that problem if you X forward.

Basically all I'm trying to say is: it's great to theorize about these things before you get down to practice. I tried these things, and in practice you want one machine to handle all video. And the ugly truth is that if you want to game, that machine has to be Windows. For practical purposes, communication between discrete hardware machines just isn't up to where you want it to be in terms of feel, or at least it's miles behind running an X server on Windows and X forwarding.
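For what it's worth, the X-forwarding side is only a couple of lines of ssh client config on the machine running the X server (the host alias and address here are made up):

```
# ~/.ssh/config
Host linuxbox
    HostName 192.168.1.10
    ForwardX11 yes
```

After that, `ssh linuxbox someapp` pops the remote app's window straight onto the local display.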
 
For 1 Xen barebone with a "virtualized" workable workstation you basically need:
- CPU "time"

Dedicated "hardware" (as in not usable for other guest/domU systems):
- RAM (if you pass hardware through, then what you allocate, say 4 GB, is "gone" and only usable for that domU)
- GPU (ATI cards work best)
- physical NIC (had issues with virtual NICs)
- HD space (duh)
- USB hub (different brands make it easier to identify who gets what)
- motherboard/USB sound device

My setup has been working for 6 months now: Debian Wheezy (3.0 kernel) with Xen 4.1.

I have a USB/network printer (USB for scanning) and a slimline external DVD drive which gets connected to whichever host needs it, including dom0.

I have 2 Win7 workstations that each have their own dedicated USB hub and an ATI Radeon HD 6xxx card (2 monitors per host). For sound, one host has a 10-euro USB sound dongle (Sweex); the other has the internal sound card attached.

Then I have other domU hosts that do server-like thingies ;)
For gaming, watching movies (DVD/HD 1080p) or using TeamSpeak, no problems found.

my youtube vid


The problems I do run into are:
- I needed to attach a real NIC; a virtual NIC gave too much intermittent connectivity (or none).
- On 1 of the workstations the USB restarts at random... I can work for hours, then it restarts. I haven't found out whether it's the Win7 OS or some Xen-related issue. Since it's not easy to reproduce, it's hard to figure out.
- The CPU (i7 980) does not get detected correctly by dom0 or Xen (have to research).
- I now use a dd-ed .img file as a virtual disk. I would like to switch to a virtual drive type.
- Networking is also quite "difficult", as in: it works now for me, but I would like to use a virtual switch.
- virt-manager is kind of buggy (not all stats are shown, e.g. disk/network load).

I saw that virsh is also usable for "disconnecting" devices from the "host"; I now use pci-stub.

Btw, for everybody who passes through a whole USB controller: do like I did and attach a USB hub to it.
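For reference, the pci-stub route is just a kernel parameter on the dom0 boot line (the vendor:device pair below is a placeholder; take yours from `lspci -nn`):

```
# appended to the dom0 kernel line in GRUB; pci-stub claims the device at boot
pci-stub.ids=1002:6759
```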
 
I signed up to the forum based on this thread, like dizzy above.

My aim is to get Linux Mint 13 Maya 64-bit working as Xen dom0 with Windows 7 as domU. I have only 1 graphics card, a PNY Quadro 600 (Nvidia), and one screen.

I know that dizzy has a nice tutorial for Fedora 16, but I would rather have Linux Mint for a Linux desktop.

I would be perfectly happy to boot into the Windows domU and access the Linux Mint dom0 via ssh and X forwarding.

I run my day to day stuff on Linux, but I need direct GPU access for editing photos. Unfortunately all professional photo editing software runs only on Windows or Mac. Plus I need to be able to calibrate my screen and upload the calibration data (LUT) via the GPU's DVI connector.

I have an Asus Sabertooth X79 board with an i7 3930K CPU with VT-d support and 32GB RAM.

I'd like to reserve 5 cores for Windows for the CPU-intensive photo editing stuff and 1 core for Linux, which is mainly Internet, OpenOffice, and email.

Any idea whether Linux Mint will work?

I've installed LM13 with the Xen hypervisor (boot time is still an issue). I also use LVM for /, /home, and swap.
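Since everything is on LVM anyway, my plan is to give the Windows domU its own logical volume as its disk; a sketch of what I have in mind (the volume group name vg0 and the size are just placeholders):

```
# carve out a volume:      lvcreate -L 100G -n win7 vg0
# then in the domU config, point the guest's disk at it:
disk = [ 'phy:/dev/vg0/win7,hda,w' ]
```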

Will that setup work with 1 GPU and ssh for access to the Linux dom0?

Can I change the CPU core reservation on the fly if I need to rip my DVDs for my video streamer?
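From what I've read, xm seems to allow exactly that at runtime, up to the vcpus count in the guest's config; something like this (the domain name win7 is a placeholder, and the run() helper just prints the commands instead of executing them):

```shell
#!/bin/sh
# Dry-run sketch of resizing a running guest's vCPU allocation with xm.
run() { echo "+ $*"; }

run xm vcpu-set win7 6    # temporarily give the domain 6 vCPUs
run xm vcpu-set win7 5    # hand one back when the ripping is done
run xm vcpu-list win7     # check the current assignment
```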

Any suggestions are most welcome! Very interesting thread!
 
My PC is now up and running with Linux Mint 13 as dom0 and Windows 7 Pro 64-bit as domU. It works perfectly - well, almost. LAN network speed on both dom0 and domU is still subnormal.

However, Windows on Linux/Xen works like it's on steroids. Graphics, disk, CPU, and RAM all perform like native, perhaps even better.

Like dizzy4 did for Fedora 16, I wrote a how-to for Linux Mint users (it should work with Ubuntu 12.04 and perhaps Debian too) - see http://forums.linuxmint.com/viewtopic.php?f=42&t=112013.

Hope you like it.
 