XenServer direct disk access

Could be done as long as you pass through all the involved SATA controllers using PCI pass-through and then do RaidZ on the VM itself.
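For example (just a sketch with made-up device names; the actual names depend on the guest OS), once the controller and its disks show up inside the VM you build the pool there as usual:

zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
zpool status tank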
 
Thanks!

Anyone know if passing SATA disks directly as shown in the link is an OK idea? This is just for home use, nothing major.
 
I've never tried it with RAID-Z. It is not, strictly speaking, a complete passthrough, as Dom0 is still aware of the disks. I haven't tried disk passthrough with XenServer / XCP, but it is easy to do with KVM or open-source Xen.

I suspect it will work, and as long as you have the Xen I/O drivers installed on OmniOS (or maybe the kernel has the integrated drivers enabled) then performance should be OK. Performance would still likely be better with PCI passthrough of a complete storage controller or an SR-IOV virtual device, but by the sounds of it you just want to know if it will work. I'm sure someone who knows with some certainty will answer soon enough.
 
Yes, it will work. Xen has supported I/O passthrough for quite some time, and it's easier to enable than on KVM. I like KVM and use it a lot, but Xen handles it better.
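With open-source Xen and the xl toolstack, for instance, passing a whole controller through looks roughly like this (the PCI address is hypothetical; check yours with lspci):

# on the host, mark the device as assignable
xl pci-assignable-add 0000:03:00.0

# then in the guest config, hand it to the domU
pci = [ '0000:03:00.0' ]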
 
I don't remember the link, but there IS a way to pass individual SATA drives to a XenServer guest (Google and you might find it). I ran into a limit of 6 drives showing up in the list of 'removable drives' in the GUI though, which was annoying.
 

I have been trying to google it all day but haven't found anything (except the link above, making it look like an external device). Anyone else?

Thanks everyone so very much. :)
 
Thanks for your replies! They are very helpful. :)

I looked into it and my hardware doesn't support IOMMU (HP ProLiant MicroServer N40L). The question therefore is: what is the best way to forward the disks to a VM for ZFS without IOMMU (PCI passthrough)?

Ubuntu, ZFS on Linux and XCP on top of that? https://help.ubuntu.com/community/S...Installing_Xen_.28XCP_-_Xen_Cloud_Platform.29

Well that changes the conversation completely. You are not going to pass through anything without IOMMU/VT-D. Then what I would do is try NFS first. It works well on ZoL.
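If it helps, sharing a dataset over NFS from ZoL is only a couple of commands (pool/dataset names and the subnet below are made up, and you need the distro's NFS server package installed for sharenfs to do anything):

zfs create tank/vmstore
zfs set sharenfs='rw=@192.168.1.0/24' tank/vmstore

# then on the client
mount -t nfs storagevm:/tank/vmstore /mnt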
 
Whoops. I should have read that link far more carefully. It's doable.
 
Here is why I gave up on XenServer/XCP on ZFS. Apparently the dom0 cannot be 64-bit, which was a huge show-stopper for me. XenServer (apparently) doesn't even *have* a 64-bit dom0. I tried installing Ubuntu in 64-bit flavor and building/installing XCP on that, but it either wouldn't build or it croaked at startup (don't remember which). The closest I saw was a blog from a couple of years ago where someone was trying to do all of this from scratch, but nothing after that. Forgive the vagueness, this was all last year and I kinda let it fade, since it wasn't relevant anymore...
 

That's weird. It's been a while for me too, but I never had a problem with addressing memory well over the 32-bit limit. Now, I did compile it myself, so maybe that's why.
 
I am not all that savvy about some of those packages, so I can't say why it failed, but it did in a big way, so I gave up (for now at least...). You actually had a 64-bit dom0 working? With more than 4 GB of RAM?
 

Yup. I tested that and KVM, and I ended up going the KVM route because of several benefits. There are directions somewhere on how to compile the kernel yourself though. If I find them I'll post it. I am actually going to do Xen when I rebuild my main workstation. I want to pass through video to Windows guests for gaming purposes. I'm sure I could do it on KVM, but just passing through the NIC was a PITA. I got it working but man... it wasn't easy.
 
I then tried a guest with a passed-through device but failed two different ways: HW passthrough didn't work because the lame BIOS the HVM guests use would blow up on the HBA's option ROM. I also tried PV passthrough, but Xen doesn't support more than 3.5 GB or so of RAM (the memory conflicts with PCI space). I gave up...
 
You can get it up and running using Proxmox (QEMU/KVM).

Instead of giving the virtual machine a virtual HDD...

Just pass through the entire HDD

e.g.
assign /dev/sdd directly to the virtual machine.

Better still, to avoid mix-ups,

pass through the /dev/disk/by-id/ name, so the HDDs don't get mixed up when you add more drives at a later stage.

<snip>
How can I assign a physical disk to a VM?

You don't have to do anything at host level (i.e. not add to fstab or anything), just set it as available directly to the KVM guest:

qm set <vmid> -ide# /dev/sdb

Or:

qm set <vmid> -ide# /dev/disk/by-id/[your disk ID]

...since having the drive letter change (should you add a drive) might have unintended consequences.

Also see /etc/qemu-server/<vmid>.conf if you want to add it by editing the conf file by hand (i.e. adding ide1: /dev/sdb2). After that you can run the VM as usual, and you will have the new storage device available inside it. Beware that you can't assign it to more than one running VM if the filesystem is not designed for such a scenario.
</snip>

from here
https://pve.proxmox.com/wiki/Troubleshooting
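To make that concrete, here is roughly what it looks like (the VM ID and the disk's by-id name below are made up; find yours with ls):

# find the stable name of the disk you want to hand over
ls -l /dev/disk/by-id/

# attach the whole disk to VM 101 as its second IDE device
qm set 101 -ide1 /dev/disk/by-id/ata-ST2000DM001-1CH164_Z1E1234A

# which ends up in /etc/qemu-server/101.conf as something like
ide1: /dev/disk/by-id/ata-ST2000DM001-1CH164_Z1E1234A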

No need for VT-D with proxmox <Wink Wink>

I have tried it, and it works well

Made a Windows virtual
Assigned it a /dev/disk/by-id
installed Windows

shut down Proxmox....
popped the drive in a REAL Windows machine
Files were all available.
Copied a file onto the HDD
popped it back into the Proxmox box
fired the Windows virtual back up

all was good and the newly placed file was there...

didn't seem to worry it at all.

When I get time I will try out a Solaris virtual and give it some full HDDs to make a pool.
Then export the pool and see if it comes up OK on a real Solaris machine or not.
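(The move itself should just be the usual export/import dance; the pool name below is made up:)

# in the VM, before pulling the disks
zpool export tank

# on the real Solaris box, after moving the disks over
zpool import               # with no name this just lists importable pools
zpool import tank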

 
Well that changes the conversation completely. You are not going to pass through anything without IOMMU/VT-D. Then what I would do is try NFS first. It works well on ZoL.

I think I might try iSCSI first.
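If I do, the rough shape on the ZoL side would be something like this (names and size are made up, and the actual target setup depends on which iSCSI target is used, e.g. tgt, LIO/targetcli, or COMSTAR):

# carve out a zvol to export as a block device
zfs create -V 200G tank/vdisk0
# then point the iSCSI target at /dev/zvol/tank/vdisk0

# on the initiator side, with open-iscsi
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node --login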
 
I then tried a guest with a passed-through device but failed two different ways: HW passthrough didn't work because the lame BIOS the HVM guests use would blow up on the HBA's option ROM. I also tried PV passthrough, but Xen doesn't support more than 3.5 GB or so of RAM (the memory conflicts with PCI space). I gave up...

Here you go.
 
Sorry if I was unclear. My original intent was to try to get XCP or XenServer working with a 64-bit dom0. The problem is not Xen per se, but the XCP and XenServer toolkits not being 64-bit. I did a quick skim of the article you linked (thanks!), but it looked to me like straight-up Xen, not XenServer or XCP?
 

Nope, my bad, I missed that in your earlier post. I looked just now and the best I could find was this...

(Read till the end)

It looks like when it comes to XCP and XS they prefer you to pass through just the targets, not the whole HBA.
 