All-In-One: XenServer 6.2 vs. ESXi 5.5

Jim G

Hi guys,

I had an AIO w/ZFS on ESXi 5.1 a little while ago, then split it up into a fileserver machine running on bare metal and a XenServer 6.2 VM machine. All VMs were stored on the fileserver, along with everything else for the business.

Now I need to move back to a single machine. I do remember reading somewhere on [H] - a post by Gea perhaps? - that XenServer wasn't well suited to an AIO setup because of really poor I/O performance, with VM I/O being routed through dom0, or something like that. I can't find the reference now.

I'd like to stick with XenServer if possible as I don't really want to convert 20-odd VMs back to ESXi... but if the performance delta is huge I will. I also simply prefer XenServer 6.2, but performance is more of an issue than preference here.
Anyone have any input?
 
I can't give much data to support it and 6.2 might be better but my experience with 6.1 was TERRIBLE performance.
 
I can't give much data to support it and 6.2 might be better but my experience with 6.1 was TERRIBLE performance.

Well. That isn't a promising start! Thanks for the response. I don't suppose you tried ESXi with the same hardware config?
 
I don't run AIO now, but I recall performance being better than a separate fileserver, since the traffic doesn't need to make a network hop...
 
Oh, sorry :) ESXi. I have dabbled a bit with XenServer and always ended up putting it on the shelf due to idiosyncrasies I didn't like.
 
Oh, sorry :) ESXi. I have dabbled a bit with XenServer and always ended up putting it on the shelf due to idiosyncrasies I didn't like.

Thanks :) After doing some more searching following the replies, it's starting to look a lot like it's not worth my time to try out XenServer in an AIO - I'm aiming to minimise the time needed to switch, so taking the known/safe path is more appealing.
 
Hang on. I was just reading... does the web client really expire after 2 months and the native client won't function for V.10 VMs? Is there any reason not to just use 5.1 assuming <=32GB RAM?
 
Hang on. I was just reading... does the web client really expire after 2 months and the native client won't function for V.10 VMs? Is there any reason not to just use 5.1 assuming <=32GB RAM?

The web client only works with a full vSphere license - so it's not free. Free ESXi doesn't include vSphere, so after 60 days the evaluation vSphere license stops working and you'll have trouble managing v10 VMs.
ESXi 5.5 does have an advantage over 5.1 in terms of memory - free ESXi 5.5 will support more than 32GB of RAM.
 
I seem to recall lopo mentioning something was going to be done to keep the free folks from getting screwed...
 
Also, I strongly recommend for an AIO that you go the route of PCI passthrough, rather than RDM or virtual disks, particularly if you're going to use ZFS. The performance is near-metal, and the storage appliance can get direct access to HW info, hot-plug drives, etc...
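For anyone setting this up, a quick way to spot the HBA you'd hand to the storage VM from the ESXi shell is below - the "LSI" string is just an example controller vendor. The passthrough toggle itself is done in the vSphere Client (Configuration > Advanced Settings under Hardware) and needs a host reboot before the device can be added to the VM:

# list PCI devices and look for the HBA to pass through to the storage VM
esxcli hardware pci list | grep -i lsi
lspci | grep -i lsi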
 
Also, I strongly recommend for an AIO that you go the route of PCI passthrough, rather than RDM or virtual disks, particularly if you're going to use ZFS. The performance is near-metal, and the storage appliance can get direct access to HW info, hot-plug drives, etc...

I hope there is something for those who want to use the free version for more than 60 days like you used to be able to!

Definitely going with passthrough, thanks :)
 
Keep in mind that you can always create VMs with HW version <= 9. There is very little reason to *need* a V10 VM - just use the thick client.
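If you're not sure what hardware version an existing VM ended up with, it's just a line in the .vmx - something like this from the ESXi shell (datastore and VM name are made up):

# check the hardware version of an existing VM
grep -i virtualHW.version /vmfs/volumes/datastore1/somevm/somevm.vmx
# a thick-client-manageable VM should show:
#   virtualHW.version = "9"   (or lower)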
 
The issue I ran into with ESXi (v5.1 - if I'm on-topic)...

Cloning/copying VMs just wasn't a feature with the "thick" vSphere client.
- I could create a VM, but then ended up re-doing a complete Linux setup each and every time.

I was hoping to find a scripted method for doing it - but, just didn't see it in the documentation.

And... that's why I'm looking at Xen for my next hypervisor build. But the fact that ESXi v5.5 allows more than 32GB of RAM in the free version is very attractive.

And Xen+Centos v6.4+ is an attractive combination... something new/shiny that's not too easy...
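For the record, a crude scripted clone is doable from the ESXi shell even on the free license - roughly along these lines, with the datastore and VM names being placeholders:

# thin-clone the disk with vmkfstools, then copy and fix up the .vmx
SRC=/vmfs/volumes/datastore1/template-vm
DST=/vmfs/volumes/datastore1/new-vm
mkdir "$DST"
vmkfstools -i "$SRC/template-vm.vmdk" "$DST/new-vm.vmdk" -d thin
sed 's/template-vm/new-vm/g' "$SRC/template-vm.vmx" > "$DST/new-vm.vmx"
# register the copy so it shows up in the client
vim-cmd solo/registervm "$DST/new-vm.vmx"

Answer "I copied it" when powering the clone on so it gets a new UUID and MAC.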
 
I am confused. I routinely clone linux guests using esxi. What specifically do you mean?
 
Hang on. I was just reading... does the web client really expire after 2 months and the native client won't function for V.10 VMs? Is there any reason not to just use 5.1 assuming <=32GB RAM?

Native client works just fine when connecting to ESXi, but not when connecting to vCenter. Get a free license and you're golden.
 
I like running Xen 4.5 on Arch Linux and have no issues with performance at all. CentOS 6.4 with Xen is also nice because you can run Xen 4.4 plus XAPI, which effectively gives you XenServer. I'm running Xen because I have servers with more than one CPU and over 32 GB of RAM. Xen has nice text-based config files, or you can use libvirt or xm. KVM is another choice - it does the virtualization directly in the Linux kernel rather than under a microkernel-style hypervisor, and some claim it performs better. There's also QEMU and many other options.
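As an illustration of those text-based configs, a minimal PV guest definition for xl (the 4.4/4.5 toolstack) looks roughly like this - the name, volume, and bridge are placeholders:

# write a minimal PV guest config and start it
cat > /etc/xen/guest1.cfg <<'EOF'
name       = "guest1"
memory     = 2048
vcpus      = 2
bootloader = "pygrub"
disk       = [ 'phy:/dev/vg0/guest1,xvda,w' ]
vif        = [ 'bridge=xenbr0' ]
EOF
xl create /etc/xen/guest1.cfg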
 
I run XenServer 6.0.2 and 6.2, currently running the Enterprise license. I haven't had but a couple of little quirks, which were fixed with a patch update.
 
I run XenServer 6.0.2 and 6.2, currently running the Enterprise license. I haven't had but a couple of little quirks, which were fixed with a patch update.

In an all-in-one setup?
 
Xen IO performance for an AIO is very poor. Running the file server in a VM means that all of your IO operations have to go to the IO handlers in Dom0 and then back to kernel space and to your FS VM. Just too many trips from user-mode to kernel-mode and back to ever get good performance.

Also, Xen is much less forgiving of delayed availability of its file stores. ESXi (using NFS) is much more tolerant of having your VM storage delayed a bit after startup, which is quite necessary when the file store itself is one of its VMs.

The obvious way to fix this would be to run the file services inside Dom0 itself by installing ZoL or just using MDadm. The barrier to doing this is that Xenserver uses a 32 bit kernel for Dom0 so the larger memory footprint of an efficient file store like ZFS is a non-starter. As of the last time I looked - albeit most of a year ago - there were no immediate plans to support a 64 bit Dom0. There are Linux/XCP builds with a 64 bit Dom0 but then you run smack into tool compatibility & support issues.

Stick with ESXi for AIO. Or Hyper-V. Xen is great for many workloads, but IMNSHO it is just too much trouble to make it work well for AIO.
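For reference, the mdadm-in-Dom0 route mentioned above would look roughly like this if you did attempt it - device names, RAID layout, and export path are all placeholders:

# build a RAID10 array from four disks and export it over NFS from Dom0
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0
mkdir -p /srv/vmstore && mount /dev/md0 /srv/vmstore
echo '/srv/vmstore 192.168.1.0/24(rw,no_root_squash,sync)' >> /etc/exports
exportfs -ra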
 
Xen IO performance for an AIO is very poor. Running the file server in a VM means that all of your IO operations have to go to the IO handlers in Dom0 and then back to kernel space and to your FS VM. Just too many trips from user-mode to kernel-mode and back to ever get good performance.

Also, Xen is much less forgiving of delayed availability of its file stores. ESXi (using NFS) is much more tolerant of having your VM storage delayed a bit after startup, which is quite necessary when the file store itself is one of its VMs.

Aha! THAT was the info I was looking for when I started this thread. Thank you for posting it.

Well, my experience so far has been more positive than I would expect based on the above - the performance with ~10 VMs stored on a RAID10 ZFS SSD pool has mirrored what we were getting with bare metal. It has only been about a week and we haven't brought up the other 5-10 VMs yet so disaster may yet strike, but for our modest needs it actually seems to be working OK.

XenServer 6.2 has so far tolerated the NFS shares not being available on boot quite well - it hasn't said a single thing about it, although I've never tried to access them before the fileserver VM comes online. I should probably try that just to see if anything catastrophically breaks.

I'll post back when my experience with XenServer as an AIO is more than about a week and we have some more I/O intensive VMs running. I'm hoping that performance will meet our needs but we'll see!
 
I wonder if they made some changes/fixes? I had the same experience as piglover. Boot up, and by the time the FS VM was up, XenServer had declared the SR down and it needed a manual repair to bring it back online.
 
I wonder if they made some changes/fixes? I had the same experience as piglover. Boot up, and by the time the FS VM was up, XenServer had declared the SR down and it needed a manual repair to bring it back online.

OK. After poking and prodding it with hard reboots and whatnot, here is what happens when 6.2 boots and the fileserver VM isn't brought back online:

[Screenshot: XenCenter after boot with the fileserver VM offline - the NFS SR shows as broken]


Click repair:

[Screenshot: XenCenter's Repair Storage Repository dialog - the SR comes back online]


...so not really automatic, but good enough for my use.

One thing it's NOT doing on its own now is rebooting or shutting down in a timely fashion - it tries to unmount the NFS shares after the fileserver VM has been shut down and hangs for quite some time on each share. I've been powering it off via IPMI rather than waiting once it gets to that point, but that's next on the list of things to look into fixing.
 
Current thought process:

Autostart fileserver VM on boot - done
Find the XenServer CLI commands for re-mounting the NFS share
Write a script on the fileserver VM to log into XenServer and remount the shares
Run the above when the fileserver VM boots

...and for shutdown - see if it's possible to write one that logs in and unmounts the NFS shares prior to the fileserver VM powering off.
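A first stab at that remount script, run from the fileserver VM once its NFS exports are up - the hostname, SR label, and key-based SSH auth are all assumptions:

#!/bin/bash
# find the NFS SR's PBD on the XenServer host and plug it back in
XSHOST=xenserver01
SRNAME=NFS-VM-Storage
SR=$(ssh root@$XSHOST "xe sr-list name-label=$SRNAME --minimal")
PBD=$(ssh root@$XSHOST "xe pbd-list sr-uuid=$SR --minimal")
ssh root@$XSHOST "xe pbd-plug uuid=$PBD"

The reverse on shutdown would presumably be xe pbd-unplug against the same PBD, before the fileserver VM stops serving NFS.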
 
Find the XenServer CLI commands for re-mounting the NFS share
Write a script on the fileserver VM to log into XenServer and remount the shares
Run the above when the fileserver VM boots

...and for shutdown - see if it's possible to write one that logs in and unmounts the NFS shares prior to the fileserver VM powering off.

In regards to "repairing the SR", you're looking for the `xe pbd-plug` command. As far as shutdown goes, this can be done with rc.d.

Edit: also, I believe you can prioritize VM startup using vApp tags or the start sequence in the boot options of the VM properties menu in XenCenter.
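A rough SysV-style sketch of that rc.d shutdown hook for dom0 - the script name, SR label, and chkconfig priorities are guesses:

#!/bin/bash
# /etc/init.d/unplug-nfs-sr - unplug the NFS SR early in shutdown so the
# host doesn't hang unmounting shares the fileserver VM no longer serves
# chkconfig: 2345 99 01
# description: unplug NFS SRs before shutdown
case "$1" in
  stop)
    SR=$(xe sr-list name-label=NFS-VM-Storage --minimal)
    for PBD in $(xe pbd-list sr-uuid=$SR --minimal | tr ',' ' '); do
      xe pbd-unplug uuid=$PBD
    done
    ;;
  *) ;;
esac
exit 0

It would need registering with chkconfig --add unplug-nfs-sr afterwards.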
 
I haven't used Xen before, but some of the guys I work with have used it for running some minor VMs for personal use. From my experience, ESXi would work just fine for an AIO setup. Depending on the hardware it's run on, you shouldn't see much delay in network file transfers between VMs, since it's all handled by the vSwitch rather than having to cross a hardware switch.

Starting with ESXi 5.0, any user could apply for the free license from VMware to activate their system permanently. The only problem was that for 5.0-5.1 VMware put hardware restrictions on the free license that allowed only 1 CPU (with unlimited cores) and a max of 32 GB of RAM. Once 5.5 hit the market they removed the hardware limitations and only imposed the software (vSphere) limits that prevent you from activating some of the advanced features (live migration and such). You can use the free license on up to 999 physical hosts now, but you can only manage them one at a time, due to the requirement of a vCenter Server license.
 
OK. After spinning up most of the VMs we need running on here, performance is terrible - latency on the dual-mirror SSD zpool is sky-high - so I'm going to record a bunch of benchmarks, move to ESXi 5.5, and see how we go there. I suspect it'll be much improved, given that our AIO on much the same hardware with 5.1 was great.
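A couple of quick ways to capture those latency numbers before and after the switch - the pool name and fio parameters are just examples, and fio is assumed to be installed on the fileserver VM:

# watch per-vdev throughput and IOPS on the pool while the VMs are busy
zpool iostat -v tank 5
# synthetic 4k random-write run against the VM dataset
fio --name=randwrite --directory=/tank/vmstore --rw=randwrite --bs=4k \
    --ioengine=psync --numjobs=4 --size=1g --runtime=60 --time_based --group_reporting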
 