ProxMox as ZFS-Host

hotzen · Limp Gawd · Joined Jan 29, 2011 · 349 messages
hello,
has anyone successfully used ProxMox [1] to host a ZFS storage VM, including PCIe passthrough and decent network performance?
Are there any major differences to ESXi regarding a SAN-less, one-machine, SOHO installation?

[1] Debian KVM hypervisor, http://www.proxmox.com/
 
Most people use ESXi. Search the forums for Proxmox and you'll get some posts, but not many.
 
Why would anyone want to use Proxmox instead of ESXi? What are the advantages?

After battling for hours to download a bootable image for ESXi from VMware themselves, I figured I'd try a product that wasn't so difficult to acquire. I wasn't too fond of the licensing scheme for XenServer either, not to mention the fact that it wouldn't work with any of my network interfaces when I tested it.

Basically, I don't want to be stuck down the road with a VM server that I suddenly can't manage because the developers decided to make certain aspects of their software proprietary or hard to access. Proxmox looks like it's moving in the right direction.
 
@brutalizer: Proxmox uses KVM, which is free, OSS, and improving very, very rapidly. I love Proxmox (although I don't like the 2.x interface very much). Also: isn't the new ESXi limited to something like 8GB or 16GB of RAM?

With all that being said, I think the big issue with getting ZFS working well under Proxmox is the hardware passthrough support. Under any virtualization solution, you won't get good performance with emulated storage controllers (compared to passthrough).
 
Resurrecting this (relatively) old thread: one advantage of Proxmox now is that you can install zfsonlinux on it, so there is no emulated-disk overhead. With an all-in-one, disk traffic has to pass through a network layer (even if it never leaves the hypervisor box) to go to/from the NAS/SAN VM. This way, the process on the hypervisor that is doing the work for the guest can read/write the vmdk files on the zpool directly.

I tried doing this with OpenIndiana with a GUI and VirtualBox, but VirtualBox in that setup just required too much tweaking and hacking and wasn't stable for me. zfsonlinux is going through heavy development and bugfixing right now - might be worth giving this a try...
 
I've been doing exactly what the OP is requesting for about 6 months now. I have one machine as an all-in-one running Proxmox with passthrough of storage controllers (IBM M1015 + onboard SATA) to an Ubuntu 12.04 VM running ZFS on Linux as my file server, and I pass one of the NICs through to a pfSense VM that I use as my home router. All of the VMs are running on KVM, which I like very much. I have another machine colocated with the exact same setup; however, it's currently running ESXi.

The motivation for running Proxmox is KVM, open source, and flexibility. My home machine is a Supermicro X8DTL motherboard without IPMI, and ESXi has no way of monitoring the hardware for failures (fan failures, CPU temp, etc.). Even for a home setup, not having basic hardware monitoring is a show stopper for me. My colocated machine has IPMI, which supports hardware monitoring with email alerts, and I'm able to poll the ESXi host with some simple Perl scripts to check for hardware alerts as well.

If ESXi meets all your requirements, it is definitely the more mature product and likely to cause the least amount of problems - although they did just release 5.1, which completely breaks PCI passthrough for certain devices, forcing many of us to stick with 5.0, so it is far from perfect. Let me know if you have any specific questions I can help with.
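For reference, PCI passthrough on a Proxmox/KVM host of that era boils down to enabling the IOMMU on the host and mapping the device into the guest's config file. A rough sketch - the PCI address 04:00.0 and VM ID 101 are placeholders for your own controller and VM:

```shell
# 1. Enable the IOMMU (Intel VT-d) on the host kernel command line,
#    e.g. in /etc/default/grub:
#      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
#    then regenerate the GRUB config and reboot:
update-grub

# 2. Find the PCI address of the HBA (the M1015 shows up as an LSI device):
lspci | grep -i lsi

# 3. Map the device into the VM by adding a hostpci line to its config
#    (VM ID 101 and address 04:00.0 are examples):
echo "hostpci0: 04:00.0" >> /etc/pve/qemu-server/101.conf
```

The guest then sees the real controller and its disks, so ZFS talks to the hardware directly instead of an emulated controller.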
 
bleomycin, have you thought about installing ZOL on the proxmox host and eliminating the middleman?
 
I hadn't really thought of it until I read your previous comment, to be honest. I'm going to be building another system soon that will give me something to tinker with during setup, so I'll definitely explore that as a possibility. When I set up my current configuration, booting from ZoL didn't look like it was worth the trouble and potential headaches; however, now I think it may be worth a shot. I'll have to spend some time on the ZoL mailing lists to get up to speed.
 
The conversation I was having with another guy (he actually runs VirtualBox on OI) was not about booting from ZOL (the Proxmox trick I was thinking about involves installing ZOL on a Proxmox install, but then creating a ZFS pool on new disk(s) - not trying to mess with Proxmox's root/boot FS). His point was that he felt he could get better disk I/O for guests if they don't have to have a network layer involved...
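A minimal sketch of that trick, assuming two spare disks and a pool named `tank` (both placeholders) - the pool lives beside Proxmox's root filesystem and is then registered as plain directory storage so guest images sit directly on ZFS:

```shell
# Create a mirrored pool on dedicated disks, leaving the Proxmox
# root/boot filesystem alone (device paths are examples - prefer
# /dev/disk/by-id/ names for anything permanent):
zpool create tank mirror /dev/sdb /dev/sdc

# Give it a dataset with a fixed mountpoint, then register it with
# Proxmox as ordinary "directory" storage for VM images:
zfs create -o mountpoint=/tank/vmstore tank/vmstore
pvesm add dir zfs-local --path /tank/vmstore --content images
```

No NFS/iSCSI loop back into a storage VM: KVM reads and writes the image files on the pool directly.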
 

I use Proxmox (just upgraded from 1.9 to 2.2 the other day), but without a box capable of PCI passthrough.

I just ordered a Xeon X3330 for my Q-chipset LGA 775 board so I can try this sometime in the next week or two...

With that being said, I'm currently using a physical server for ZFS...
 
Why would anyone want to use Proxmox instead of ESXi? What are the advantages?

Proxmox is a fancy Debian distro with a very nice GUI installer and GUI web interface. Licensing is free. No disk, CPU, or RAM limitations. No "sign-up-to-download" accounts needed.

It's also very extensible, since it's Debian and you can just add packages on the host as you need.

It is probably one of the better OSS projects out there.
 
thanks for resurrecting ;)
just evaluating again...
 
I'm thinking of doing a two-stage migration.

Step one: shut down all the guests (including the OI VM). Copy the vmdk for OI to another host. Pull the drive with ESXi on it. Put in a new drive. Install Proxmox. Create an OI VM and copy the vmdk to the Proxmox datastore. Enable passthrough on Proxmox and assign the HBA to the OI VM. Start it. Once that datastore is available, import the guests into Proxmox.

Step two (not sure when that would be): shut down all the guests, including OI. Unmap the HBA from OI and make it available to the Proxmox host. Install the ZFS tools/modules and import the pool to Proxmox. Get all the guests back up in native mode.
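The disk-copy part of step one can be done with `qemu-img`; KVM can boot a vmdk directly, but converting to raw avoids surprises. A sketch - the file names and VM ID 100 are placeholders:

```shell
# Copy the ESXi disk image over to the Proxmox box, then convert
# the vmdk to raw for the new KVM guest (paths are examples):
qemu-img convert -f vmdk -O raw oi-disk.vmdk \
    /var/lib/vz/images/100/vm-100-disk-1.raw

# Sanity-check the result before booting the VM from it:
qemu-img info /var/lib/vz/images/100/vm-100-disk-1.raw
```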
 
So much for that idea. The copied-over OI raw disk wouldn't even boot: GRUB menu, then a reboot loop. OI seems very fussy about hardware changes, I've found. So I did a mini-install of OI with all updates and passed through the HBA. It won't even complete booting successfully, so I guess I have to go straight to the end game :(
 
The Ubuntu and PBX servers were moved over as KVM guests and have been running since last night. The two Windows XP machines are still running on the backup ESXi server, but their storage is served from the Proxmox box. So far so good...
 
Everything has been rock solid so far. One of the XP VMs is now running - I had to do a repair install to get it to run properly. The last XP is trickier, since it has an encrypted (PGP) hard drive, so the entire 72GB is 'in use', i.e. it will be down longer. It's my wife's work machine, so I have to do this off-hours...
 
Two questions:
Is there any memory overcommit mechanism on Proxmox?
Is there a guest cache (speeding up guest VMs) on Proxmox?

Just read and watched the tutorial; this is pretty neat. Thinking of replacing my old ESXi with Proxmox - the features are promising.
 
Well, it's basically just Debian. With ZOL on top, memory management is however ZoL handles the ARC... I ran CrystalDiskMark from my XP guest: write speed 180MB/sec, and read speed several times faster than that :)
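One knob worth knowing when ZoL and KVM guests share the same RAM: the ARC size can be capped with a module option so the cache doesn't compete with the VMs. A sketch, where the 4 GiB cap is just an arbitrary example:

```shell
# Cap the ZFS ARC at 4 GiB so it leaves RAM for the KVM guests
# (value is in bytes; 4 GiB here is only an example):
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf

# Takes effect when the module is next loaded (or after a reboot).
# Current ARC size and limit can be checked at runtime:
grep -E "^(size|c_max)" /proc/spl/kstat/zfs/arcstats
```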
 
Hi,

I googled for an All-in-one PROXMOX solution and decided to wake up this old thread again.

I am doing some research on PROXMOX to replace my ESXi 5.1 host as I want to go all-in on open source leaving vsphere behind.

I am researching the possibility of running PROXMOX as a combined virtualization host and ZFS storage server with ZFS as local storage, avoiding the more traditional approach of passing a SAN controller through to a VM running FreeNAS or similar, which then shares its storage back to the host.

Has anyone any experience with running ZOL directly on the PROXMOX host?

My initial idea is to have a ZFS mirror for the HOST OS and possibly some key VM's in addition to a ZFS RAID for additional storage to be used by the VMs.

As I said, I am still researching this, so please let me know if my ideas don't make sense - but if so, please also let me know why :)
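The layout described above could look roughly like this, assuming a small mirror for key VMs and a RAIDZ pool for bulk VM storage (pool names and device paths are all placeholders; the host OS install itself is a separate question):

```shell
# Mirrored pool on two small disks for key VMs (example paths):
zpool create vmpool mirror /dev/sdb /dev/sdc

# RAIDZ pool across four larger disks for the remaining VM storage:
zpool create tank raidz /dev/sdd /dev/sde /dev/sdf /dev/sdg

# Check the layout and health of both pools:
zpool status vmpool tank
```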
 

I don't have any experience with running ZoL on a Proxmox host directly, but I don't see any obvious reason it wouldn't work. ZoL supports Debian Wheezy; you can start with a fresh Wheezy install, add ZoL, then install Proxmox: http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Wheezy
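The rough sequence would be as follows - package and repo names here match what the ZoL project and the Proxmox wiki published for Wheezy at the time, but check the current instructions before copying anything:

```shell
# On a fresh Debian Wheezy install, add the ZoL repository package
# (download the current repo-setup .deb from zfsonlinux.org; the
# filename changes between releases), then install ZFS:
dpkg -i zfsonlinux_*_all.deb
apt-get update
apt-get install debian-zfs     # ZoL metapackage on Wheezy

# Then add the Proxmox VE repository and install the hypervisor
# packages, per the wiki page linked above (the kernel package
# name is version-specific; the wiki has the current one):
echo "deb http://download.proxmox.com/debian wheezy pve" \
    > /etc/apt/sources.list.d/pve.list
apt-get update
apt-get install proxmox-ve-2.6.32
```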
 
Thank you for the reply,

I will do some more research on ZoL before deciding what to do.
 