Xen hypervisor users: what "cool" things have you done / built?

darkdub

There are an overwhelming number of discussions on ESX(i), and similarly so for desktop-based products. Where are all of the Xen users?

Have you used Remus to implement "RAID-1" VMs, or taken the next step and utilized this successfully over a lower-bandwidth WAN link? What were the challenges you faced? How well does it work?

Have you successfully set up PCI-passthrough for a newer video card to create a virtualized system for gaming? What about setting up two high-performance virtual gaming desktop environments from one physical machine? How about crossfire / SLI? In practice, what were the bottlenecks?

Done something else that isn't done easily with another virtualization platform? Talk about it here!
 
To start off the commentary... My current project is to create a high-performance desktop / gaming experience for my wife and myself using two high-end graphics cards in one PC.

The plan is to use the fastest consumer-grade hardware I can easily get my hands on, within reason. It's currently based around the socket-1155 platform with an i7-2700K processor (overclocked for the fun of it), 32 GB RAM, 2 x 7970 graphics cards, a 1200-watt power supply (overkill for the moment, but I want to experiment with passthrough CrossFire), four 128 GB SSDs and four 1 TB 7200 RPM drives.

The drives will be configured on the Linux host as RAID-10 (software-based), allocating basically half of the available storage to each desktop environment. In this way, there should be a significant speed advantage when one of the desktop environments is utilizing the disks, and adequate performance when used in tandem.
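As a rough sketch of what I have in mind (the device names here are just placeholders, not my actual layout), the four spinners would be assembled with mdadm and then split roughly in half with LVM:

# build a software RAID-10 array from the four 1 TB drives
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# put LVM on top and give each desktop roughly half of the usable space
pvcreate /dev/md0
vgcreate vg_desktops /dev/md0
lvcreate -L 900G -n desktop1 vg_desktops
lvcreate -L 900G -n desktop2 vg_desktops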

The end result, hopefully, should allow games like Diablo III, StarCraft, Battlefield 3, etc. to be played at fairly high quality on both "systems" at once.

How the physical Blu-ray drive will be shared is still a mystery... :)

I realize going into this project that simply splitting the hardware between two systems would grant superior performance, but the question is, by how much? Games don't typically utilize all 4 cores + 4 hyperthreads at once. There is a penalty for the abstraction, but various benchmarks (I'll try to update later with links) seem to indicate that it is minimal. In fact, when only one "system" is in use, I expect superior performance from the storage environment. It should be interesting to see how much so.
 
Intriguing idea! One question though: I thought that the "K"-designated i7s had the VT-d flag disabled. You specify an i7-2700K, so you may need to rethink/research this before you buy.
 
To start off the commentary... My current project is to create a high-performance desktop / gaming experience for my wife and myself using two high-end graphics cards in one PC.

The plan is to use the fastest consumer-grade hardware I can easily get my hands on, within reason. It's currently based around the socket-1155 platform with an i7-2700K processor (overclocked for the fun of it), 32 GB RAM, 2 x 7970 graphics cards, a 1200-watt power supply (overkill for the moment, but I want to experiment with passthrough CrossFire), four 128 GB SSDs and four 1 TB 7200 RPM drives.

The drives will be configured on the Linux host as RAID-10 (software-based), allocating basically half of the available storage to each desktop environment. In this way, there should be a significant speed advantage when one of the desktop environments is utilizing the disks, and adequate performance when used in tandem.

The end result, hopefully, should allow games like Diablo III, StarCraft, Battlefield 3, etc. to be played at fairly high quality on both "systems" at once.

How the physical Blu-ray drive will be shared is still a mystery... :)

I realize going into this project that simply splitting the hardware between two systems would grant superior performance, but the question is, by how much? Games don't typically utilize all 4 cores + 4 hyperthreads at once. There is a penalty for the abstraction, but various benchmarks (I'll try to update later with links) seem to indicate that it is minimal. In fact, when only one "system" is in use, I expect superior performance from the storage environment. It should be interesting to see how much so.

Wouldn't you be better off doing this on LGA2011?
* 8 RAM slots
* 6 cores + HT
* Better hardware pass-through support.
 
Wouldn't you be better off doing this on LGA2011?
* 8 RAM slots
* 6 cores + HT
* Better hardware pass-through support.

^^All of this. Not to mention the variety of headaches with VT-d on 1155 (just Google it if you want to see the wide array of hit/miss, much more miss than hit, in getting this type of solution working).
 
Intriguing idea! One question though: I thought that the "K"-designated i7s had the VT-d flag disabled. You specify an i7-2700K, so you may need to rethink/research this before you buy.

Very interesting. I actually hadn't realized the "K" designation had VT-d disabled. The main differences I was aware of were the lack of vPro in the "K" models and the locked multipliers in the non-"K"s.

For my purposes, the lack of remote management wouldn't be a big deal, but not having VT-d kind of kills the whole idea...

Thanks for the heads up!
 
...
* Better hardware pass-through support.

Would you mind elaborating on this? There have been two comments to this effect now. If there are significant issues with VT-d on the 1155 platform, I'd be better served spending the additional money to move to the 2011 socket.
 
Software-based RAID?

I should have been more specific... software-based LVM RAID. I want TRIM support for the SSDs, and I'm not aware of any inexpensive hardware controllers that support it. Perhaps the landscape has changed, and if so I'd gladly eat my words.

For enterprise environments, I'd be among those extolling the virtues of hardware-based RAID cards with battery-backed write caching and the host of other features they provide, beyond offloading from the main processors. Software RAID isn't given nearly enough credit though.
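To illustrate what I mean (device names are invented, and whether TRIM actually gets passed down depends on the kernel and LVM versions in play), a mirrored SSD volume built directly in LVM looks something like:

# mirror two SSDs using LVM's raid1 segment type
pvcreate /dev/sdf /dev/sdg
vgcreate vg_ssd /dev/sdf /dev/sdg
lvcreate --type raid1 -m 1 -L 100G -n fast1 vg_ssd

# mount with discard (or run fstrim on a schedule) to keep TRIM working
mkfs.ext4 /dev/vg_ssd/fast1
mount -o discard /dev/vg_ssd/fast1 /mnt/fast1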
 
You should try Proxmox. That is awesome software; I've been contemplating switching to it from Xen.
 
You should try Proxmox. That is awesome software; I've been contemplating switching to it from Xen.

I looked into KVM (which Proxmox uses) vs. Xen virtualization. From what I understand, until very recently only Xen of the two was able to do full PCIe passthrough for graphics cards. Apparently this has been resolved, but the kernel version distributed with the current stable Proxmox (2.0) doesn't yet support it.

Either way it's going to involve some kernel hackery (or compilation at the very least), and when it comes down to it, I'm just more familiar with Xen.

Definitely appreciate the suggestion though!
 
I looked into KVM (which Proxmox uses) vs. Xen virtualization. From what I understand, until very recently only Xen of the two was able to do full PCIe passthrough for graphics cards. Apparently this has been resolved, but the kernel version distributed with the current stable Proxmox (2.0) doesn't yet support it.

Either way it's going to involve some kernel hackery (or compilation at the very least), and when it comes down to it, I'm just more familiar with Xen.

Definitely appreciate the suggestion though!

The reason I might move is that Xen needs a client to access and manage it, same as ESXi, whereas Proxmox is 100% browser-based and works in most browsers. I tried it and was very happy; it was actually pretty easy to set up and get working too.

Dash.
 
The reason I might move is that Xen needs a client to access and manage it, same as ESXi, whereas Proxmox is 100% browser-based and works in most browsers. I tried it and was very happy; it was actually pretty easy to set up and get working too.

Dash.

Is it based on Java, Flash, or HTML5? Just curious whether any special plug-ins are needed, and about possibly managing it via a smartphone browser. Nice concept for remote administration.
 
I couldn't find much on Xen's 3D acceleration passthrough - will this actually work for DX9+ titles?
 
Is it based on Java, Flash, or HTML5? Just curious whether any special plug-ins are needed, and about possibly managing it via a smartphone browser. Nice concept for remote administration.

Well, I was able to administer it with my iPad and iPhone :)

My *guess* would be that it's HTML-based.

The main reason I might switch is that I don't need a client at all.
 
I couldn't find much on Xen's 3D acceleration passthrough - will this actually work for DX9+ titles?

Actually, yes. With ATI cards, it works very well, with only a slight drop in performance vs. bare metal. For an example, see this video:

http://www.youtube.com/watch?v=L_g7ZBMWoLk

It's an example of two "desktops" powered off of one physical machine with two graphics cards. This is similar to what I want to build, but I intend to try CrossFireX and high-performance drives in RAID.
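For reference, the relevant fragment of an HVM domU config ends up being just a few lines; something like this, where the PCI addresses are only examples (the real ones come from lspci on your own box):

# fragment of an HVM guest config -- addresses below are examples only
builder = 'hvm'
memory = 8192
vcpus = 4
gfx_passthru = 0                    # keep the emulated VGA as primary; the Radeon is secondary
pci = [ '01:00.0', '01:00.1' ]      # the Radeon GPU plus its HDMI audio function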
 
The reason I might move is that Xen needs a client to access and manage it, same as ESXi, whereas Proxmox is 100% browser-based and works in most browsers. I tried it and was very happy; it was actually pretty easy to set up and get working too.

Dash.

That's a fair point. Xen by itself (not XCP, XenServer, etc.) is not as user-friendly. I'm a bit of a terminal junkie, having cut my teeth on MS-DOS and early *nix systems, so I somewhat prefer the clean text-based management. =)

If you do go down the Proxmox route, please share your experience, especially if you spend any time on graphics passthrough.
 
If you're doing high-end games, I would go after a 6- or 8-core hyperthreaded CPU -- or even dual 4-core (or more) CPUs for SMP-based virtualization. 32 GB of RAM sounds like enough for a 2-person host.

Just curious -- how are you going to do the monitors, keyboards, and mice? :?
 
Actually, yes. With ATI cards, it works very well, with only a slight drop in performance vs. bare metal. For an example, see this video:

http://www.youtube.com/watch?v=L_g7ZBMWoLk

It's an example of two "desktops" powered off of one physical machine with two graphics cards. This is similar to what I want to build, but I intend to try CrossFireX and high-performance drives in RAID.

I'll have to check that out, thanks. I should have mentioned that I did see the YouTube videos, but there's no YouTube at work.
 
If you're doing high-end games, I would go after a 6- or 8-core hyperthreaded CPU -- or even dual 4-core (or more) CPUs for SMP-based virtualization. 32 GB of RAM sounds like enough for a 2-person host.

Just curious -- how are you going to do the monitors, keyboards, and mice? :?

For any passthrough virtualization, it's pretty straightforward: pass a VGA adapter and a USB hub to a guest VM, and voila, non-virtualized desktop I/O.

That being said, there are a lot of caveats to getting this to play nicely, but it's definitely doable (I'm typing this on an ESXi box that is passed through in that very manner).
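With Xen the equivalent is only a couple of commands on a reasonably recent xl toolstack; a rough sketch (the bus address and domain name are examples, not taken from a real box):

# find the USB controller you want to dedicate to the guest
lspci | grep -i usb

# make it assignable and hand it to the running domU
# (or just list it in the guest config's pci = [ ... ] line)
xl pci-assignable-add 00:1a.0
xl pci-attach win7-desktop 00:1a.0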
 
For any passthrough virtualization, it's pretty straightforward: pass a VGA adapter and a USB hub to a guest VM, and voila, non-virtualized desktop I/O.

That being said, there are a lot of caveats to getting this to play nicely, but it's definitely doable (I'm typing this on an ESXi box that is passed through in that very manner).

Have you run any sort of standard benchmarks on your hardware to compare performance virtualized vs. bare metal? I'd love to see some numbers for ESXi passthrough performance.

I wasn't able to locate anything quickly showing performance comparisons for this with ESXi.
 
I wanted to try Xen, but don't you need the full enterprise version for this? At least I did when I tried it a few months ago. I don't have access to a student/lab license.


If you're doing high-end games, I would go after a 6- or 8-core hyperthreaded CPU -- or even dual 4-core (or more) CPUs for SMP-based virtualization. 32 GB of RAM sounds like enough for a 2-person host.

Just curious -- how are you going to do the monitors, keyboards, and mice? :?
Just pass through the USB devices. Monitors will each be on their own GPU.

You guys are aiming at some serious hardware. Most of the ESXi-based servers I've seen use about 10% of the CPU and 100% of the RAM. Granted, this is going to be used for gaming. However, giving each VM 2 physical cores and 2 logical should be plenty. How many of the new games need 4 logical cores unless you are running SLI/CrossFire?
 
I get the passthrough of VGA, USB, etc. But what client is used to connect to the Gaming VMs?
 
I get the passthrough of VGA, USB, etc. But what client is used to connect to the Gaming VMs?
No, you don't get it.

The output comes straight out of the passed-through graphics card, and input comes in via USB.
 
I wanted to try Xen, but don't you need the full enterprise version for this? At least I did when I tried it a few months ago. I don't have access to a student/lab license.

Xen, by itself, is an open-source technology. Citrix has a commercial offering, called XenServer, that also includes a limited free version. XCP, or Xen Cloud Platform, is the open-source counterpart to XenServer (but with a different version numbering scheme)... XCP 1.5 is roughly equivalent to XenServer 6, IIRC.
 
Have you run any sort of standard benchmarks on your hardware to compare performance virtualized vs. bare metal? I'd love to see some numbers for ESXi passthrough performance.

I wasn't able to locate anything quickly showing performance comparisons for this with ESXi.

I haven't run a bare metal vs. virtualized comparison on the same hardware, as the ESXi box I'm running this on has run ESXi since it was first loaded up, and I haven't really had much of a chance to play with it since getting several VMs initialized on it.

That being said, while I can't give a good relative comparison, I can say that it functions pretty seamlessly and there's no noticeable/significant performance hit. A Windows 7 x64 VM off an SSD benchmarks right in the same spots as I've benched the drive natively in the past (Crucial M4), and a Radeon 6850 gives excellent performance with no hiccups and framerates as I'd expect. It also has no issue driving 3 monitors (PLP setup for productivity) and passing sound through DisplayPort, so the VT-d solution seems to be fairly robust. As far as input goes, there is no USB lag like there would be with a virtualized mouse through a remote console; the whole experience really feels exactly the same as if you were sitting at a dedicated bare-metal box.

I apologize that this is much more qualitative than hard numbers, but I haven't had the chance to load up a separate disk with a fresh Windows 7 install (or to take down the other VMs I have running on there).
 
I am very interested in this, as I would love to have only one computer but still have two separate OSes and play games on them like you plan to.

Subbed!
 
For one Xen barebone with a "virtualized" workable workstation you basically need:
- CPU "time"
Dedicated "hardware" (as in not usable by other guest/domU systems):
- RAM (only if you pass through hardware; if you allocate 4 GB then it's "gone" and only usable by that domU)
- GPU (ATI cards work best)
- physical NIC (I had issues with a virtual NIC)
- HD space (duh)
- USB hub (different brands make it easier to identify who gets what)
- motherboard/USB sound device

My setup has been working for 6 months now: Debian Wheezy (3.0 kernel) with Xen 4.1.

I have a USB/network printer (USB for scanning) and a slimline external DVD drive that gets connected to whichever host needs it, which includes dom0.

I have 2 Win7 workstations that each have their own dedicated USB hub and an ATI Radeon HD 6xxx card (2 monitors per host). For sound, one host has a 10-euro USB sound dongle (Sweex) and the other has the internal sound card attached.

Then I have other domU hosts that do server-like thingies ;)
For gaming, watching movies (DVD/HD 1080p) or using TeamSpeak: no problems found.

My YouTube vid:


The problems I do run into are:
- I needed to attach a real NIC; a virtual NIC gave too much intermittent connectivity (or none).
- On one of the workstations the USB restarts at random... I can work for hours and then it restarts. I haven't found out whether it's the Win7 OS or some Xen-related issue. Since it's not easy to reproduce, it's hard to figure out.
- The CPU (i7-980) does not get detected correctly by dom0 or Xen (have to research).
- I now use a dd'ed .img file as a virtual disk. I would like to switch to a virtual drive type...
- Networking is also quite "difficult", as in: it works for me now, but I would like to use a virtual switch.
- virt-manager is kind of buggy (not all stats are shown: disk/network load).

I saw that virsh can also be used for "disconnecting" devices from the "host"; I currently use pci-stub.

By the way, for everybody who passes through the whole USB controller: do like I did and attach a USB hub instead.
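Roughly, the pci-stub approach comes down to keeping dom0's drivers off the device at boot so it can be handed out cleanly; something like this (the IDs and addresses below are examples, not my exact ones):

lspci -nn | grep -i vga          # note the [vendor:device] IDs of the card to hide

# then hide those functions from dom0 at boot, either via pci-stub
# on the kernel command line:
#   pci-stub.ids=1002:6818,1002:aab0
# or via Xen's pciback driver:
#   xen-pciback.hide=(01:00.0)(01:00.1)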
 
For one Xen barebone with a "virtualized" workable workstation you basically need:
...
By the way, for everybody who passes through the whole USB controller: do like I did and attach a USB hub instead.

Good stuff! It's great to see that others have built this. I truly see a class of desktop machines heading this way in the next two to three years. The performance hit for virtualization is getting ever closer to zero.

Even gaming systems (all but the most high-end) could be built in this way. Imagine a family that has four separate decently-powered computers. For the cost of two of those systems, they could build a single higher-powered system that could host each of the individual "desktops".

How often do average users REALLY utilize all 4 + 4 cores of a newer processor? Even when gaming, not many titles split the processing across that many threads.

When used by one person at a time, a system set up like this would have more resources available than the individual machines would. When all of the desktops are in use at once, the performance would presumably still be acceptable.

Win / win! Except for when there's a hardware problem... =)
 
...
The problems I do run into are:
...
- I now use a dd'ed .img file as a virtual disk. I would like to switch to a virtual drive type...
...

Wanted to ask... When you say this, are you basically saying that you're using file images rather than LVM for your storage? If so, how is the performance? Are you running the GPL PV drivers for Windows?

Also, how do you handle CPU resource sharing? Do you basically give both desktops all cores and use CPU pools / weighting? This is the approach that in theory I'd like to implement, so that one system could potentially use all of the CPU if the other was dormant.
 
Good stuff! It's great to see that others have built this. I truly see a class of desktop machines heading this way in the next two to three years. The performance hit for virtualization is getting ever closer to zero.

Even gaming systems (all but the most high-end) could be built in this way. Imagine a family that has four separate decently-powered computers. For the cost of two of those systems, they could build a single higher-powered system that could host each of the individual "desktops".

How often do average users REALLY utilize all 4 + 4 cores of a newer processor? Even when gaming, not many titles split the processing across that many threads.

When used by one person at a time, a system set up like this would have more resources available than the individual machines would. When all of the desktops are in use at once, the performance would presumably still be acceptable.

Win / win! Except for when there's a hardware problem... =)
I'm really liking this thread. I thought about this, if only vaguely, in my teenage years as a potential idea to try out.

Of course, this thread is now getting me to brainstorm about the future when I might have a family. Perhaps when I have a family with kids, I'll have one big Bertha machine and use Raspberry Pis as thin clients to VDIs back on the same machine. Just think of it: no more having a 3-year-old trash a perfectly good, working desktop PC that costs $700-2500. Instead, they've got a $25 Raspberry Pi to break. :D

Have a house wired with CAT 6A. :cool:
 
Wanted to ask... When you say this, are you basically saying that you're using file images rather than LVM for your storage? If so, how is the performance? Are you running the GPL PV drivers for Windows?
Storage is RAID 5 via mdadm (no tuning yet) with an ext3 volume; on top of that sit the .img files (or VHD for Windows).
And yes, I am running the GPL PV drivers (all Windows machines have those drivers).

I have been running it for 6 months now; how the performance compares to bare metal, let me explain under your next question.
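For context, the difference in the domU config is basically one line; switching from a file image to, say, an LVM volume would look roughly like this (paths are made up):

# file-backed image (what I use now):
disk = [ 'file:/srv/xen/win7.img,hda,w' ]

# the same disk as an LVM logical volume instead:
disk = [ 'phy:/dev/vg_xen/win7,hda,w' ]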

Also, how do you handle CPU resource sharing? Do you basically give both desktops all cores and use CPU pools / weighting? This is the approach that in theory I'd like to implement, so that one system could potentially use all of the CPU if the other was dormant.
I have 6 cores, which is 12 with HT.
The 12 threads are divided into 3 sets: 2 per workstation (each of which has a physical monitor, keyboard, mouse, etc.).
The 8 remaining are "dedicated" to the remaining pool (dom0 and the other domUs).

When you use VT-d for hardware, all the "resources" you dedicate (CPU, RAM) become static, so they are excluded from "pool rotation"; the RAM is gone per workstation.
Since I can have 48 GB of RAM in the machine, I see that as a minor problem.

Overall performance is, I think, OK (no benchmarks). I gained performance because a "disk" now runs at RAID 5 array speed, and lost some because I virtualized... Overall I think it's a plus in performance compared to a bare-metal workstation with a normal HD.
For me the following gains have been made possible, considering it replaced 2 workstations, 1 ESX(i) or Hyper-V box, and a NAS (a NAS will be bought again, though):
+ can add CPU/RAM "on the fly"
+ less power usage
+ less spent on a new machine
+ can dual-boot and/or switch with ease between any OS (Mac OS/Linux/Windows)
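In config terms that split is just vCPU pinning; something like this (the thread numbers are only illustrative, not my exact layout):

# workstation 1 domU config: 2 vCPUs pinned to threads 0-1
vcpus = 2
cpus = "0-1"

# workstation 2 domU config: threads 2-3
vcpus = 2
cpus = "2-3"

# dom0 and the server domUs get the remaining threads, e.g. at runtime:
#   xl vcpu-pin Domain-0 all 4-11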


aronesz said:
Of course, this thread is now getting me to brainstorm about the future when I might have a family. Perhaps when I have a family with kids, I'll have one big Bertha machine and use Raspberry Pis as thin clients to VDIs back on the same machine. Just think of it: no more having a 3-year-old trash a perfectly good, working desktop PC that costs $700-2500. Instead, they've got a $25 Raspberry Pi to break.

Have a house wired with CAT 6A.
Think again; you are talking about a terminal server setup... which is not the same as what I have. (You want the same as remote desktop to another computer.)
If you want to do what I do... search for KVM extenders (they use the Cat 6 cable to extend the distance between keyboard/mouse/monitor); the problem is that currently no multi-monitor version is available.
 
Hi everybody!

Is it really necessary to have six cores?
Could you not run four cores + HT, all "dedicated"?

^^All of this. Not to mention the variety of headaches with VT-d on 1155 (just Google it if you want to see the wide array of hit/miss, much more miss than hit, in getting this type of solution working).

I am sorry to say that I haven't found anything regarding this using Google.
I would really like to know what kind of problems there are.
Isn't it enough, then, to just check the motherboard manual for a VT-d option in the BIOS?
 