Which ZFS OS + VM for System

LogicWater (n00b, joined Nov 5, 2010, 14 messages)
Hey all!

I just got a new system (1090T + GA-890FXA-UD5 + 8GB RAM + 2x 1TB + 6x 2TB EARS), and its primary purpose is to be a ZFS file server, with hosting a few VMs as a secondary role.
People on various HF threads mentioned using FreeNAS, FreeBSD, or OpenSolaris. The idea is to install the main OS on either a USB stick or a PATA HDD, then use the 2x 1TB for VMs and the 6x 2TB EARS for media files. What would be a good choice for the ZFS OS (6x 2TB EARS)?
Would it be possible to use VirtualBox on the above OS and install ESXi 4.1 inside it, create the VMs in ESXi, and connect them to the original 2x 1TB HDDs on the main OS using iSCSI?
 
ESXi is a bare-metal hypervisor. You cannot install it on top of an operating system.

You would need to install FreeBSD and then use Citrix/VirtualBox on FreeBSD for your other VMs.
 
Would it be possible to use VirtualBox on the above OS and install ESXi 4.1 inside it, create the VMs in ESXi, and connect them to the original 2x 1TB HDDs on the main OS using iSCSI?

Time to simplify this... a LOT.

1. Install ESXi
2. Make a VM with either NexentaCore, OpenIndiana, FreeBSD, or FreeNAS.
3. Add other VMs as you want.

I would not think about installing an OS, then installing VirtualBox, then installing other OSes.

You may, however, want to double check your system's ESXi compatibility first (probably would have been good before the motherboard purchase) and make sure everything is supported.

My guess is you have some pretty hardy VMs you want to run since that is a lot of CPU for a ZFS server, especially using so few and "green" drives.
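One note on the drive choice: the 2TB EARS are 4K-sector "Advanced Format" disks, so whichever ZFS OS ends up in that VM, it's worth making sure the pool is created 4K-aligned (ashift=12). A minimal sketch on a platform whose `zpool` accepts the ashift property (the pool name and device names here are placeholders, not the OP's actual devices):

```shell
# RAIDZ2 over the six 2TB EARS drives, forcing 4K alignment.
# -o ashift=12 works on OpenZFS-based systems; older Solaris-derived
# builds may need a platform-specific workaround instead.
zpool create -o ashift=12 tank raidz2 \
    c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

# Verify the alignment the pool was actually created with
zdb | grep ashift
```

With ashift=9 (512-byte) pools on these drives, every sub-4K write turns into a read-modify-write inside the disk, which is a common cause of terrible performance on EARS-based pools.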
 
Time to simplify this... a LOT.

1. Install ESXi
2. Make a VM with either NexentaCore, OpenIndiana, FreeBSD, or FreeNAS.
3. Add other VMs as you want.

I would not think about installing an OS, then installing VirtualBox, then installing other OSes.

You may, however, want to double check your system's ESXi compatibility first (probably would have been good before the motherboard purchase) and make sure everything is supported.

My guess is you have some pretty hardy VMs you want to run since that is a lot of CPU for a ZFS server, especially using so few and "green" drives.

Doing it this way requires passing the individual disks through to Nexenta, etc., so that ZFS can run on top of them. Is it yet possible in ESX to pass an entire controller (or controllers) through to a VM, so that all disks attached to that controller appear natively in the guest VM without per-disk passthrough?
 
Doing it this way requires passing the individual disks through to Nexenta, etc., so that ZFS can run on top of them. Is it yet possible in ESX to pass an entire controller (or controllers) through to a VM, so that all disks attached to that controller appear natively in the guest VM without per-disk passthrough?

Yes, if your CPU and motherboard support it (VT-d/AMD IOMMU). However, ESXi will not be able to use that controller itself, so any local storage, and the ESXi install itself, must be on a separate controller.



 
Alright, so you're saying that if I get a proper network controller card like the
Intel EXPI9402PT 10/100/1000Mbps PCI-Express Dual Port Server Adapter, 2x RJ45 (http://www.newegg.ca/Product/Product.aspx?Item=N82E16833106014CVF),
the installation will work properly.
However, once ESXi is successfully installed on the system, I would not be able to use the onboard SATA controller to create the ZFS pool from the 6x 2TB EARS?
 
You mention 8GB RAM; I've seen a few threads saying that giving ZFS 8GB of RAM is a good thing. Perhaps trying to do too much on your system may cause performance issues down the road?

Are there others running VMs + ZFS on 8GB at the moment? Just curious.
 
I've tried a bunch of different ZFS-related OSes (FreeNAS, NAS4Free, OI, SmartOS, OmniOS, ZFSguru, EON, Nexenta (free), Openfiler, etc.). Overall they all seem to work just fine, and you can get them installed very quickly; the difficulty is after the install. At the end of the day I went with OmniOS (with napp-it as the GUI). It's not as polished as FreeNAS, and it's obvious that napp-it has some English translation issues, but even as a non-Unix guru I had my RAIDZ2 pool and shares set up in no time, and it's been chugging away with no touches from me for almost two months.

I was equally impressed with SmartOS (though its documentation seemed spottier at the time), NAS4Free (I preferred it to FreeNAS), and EON. Nexenta ran R E A L L Y slow on an HP MicroServer (internal USB), but it is IMHO the most polished. OI is fine, but I didn't want a full-blown OS for a NAS storage device. I could have been quite happy with most of the solutions I mentioned (especially since I ended up running the OS on a 64GB SSD), but OmniOS was the last one I tried, and it was so easy and worked so well that I stopped there.
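For reference, the pool-and-share setup that napp-it automates can also be done by hand on an illumos-based system like OmniOS. A minimal sketch, using the in-kernel CIFS server (the pool name, dataset name, and disk device names are placeholders; check `format` for the real device names on your system):

```shell
# Create a RAIDZ2 pool from six disks (example device names)
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

# Create a dataset for media and share it over SMB via the
# illumos in-kernel CIFS service
zfs create tank/media
zfs set sharesmb=on tank/media

# Confirm pool health and the active share
zpool status tank
zfs get sharesmb tank/media
```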
 
Time to simplify this... a LOT.

1. Install ESXi
2. Make a VM with either NexentaCore, OpenIndiana, FreeBSD, or FreeNAS.
3. Add other VMs as you want.

I would not think about installing an OS, then installing VirtualBox, then installing other OSes.

You may, however, want to double check your system's ESXi compatibility first (probably would have been good before the motherboard purchase) and make sure everything is supported.

My guess is you have some pretty hardy VMs you want to run since that is a lot of CPU for a ZFS server, especially using so few and "green" drives.




I'd go along with most of this, except the VBox part. VBoxHeadless in single-instance Solaris Zones runs surprisingly fast and light, and the networking is excellent. You wouldn't think a virtualizer running inside a container would do so well. Go figure.

I'd say if you're like the OP and want multiple VMs on ZFS, you should give Solaris 11.1 desktop with VBox in Zones a whirl. S11 is the VBox platform par excellence.



I may get some flak on this, but nowadays graphics passthrough is the primary reason for a home user to wall himself into a Type 1 hypervisor for ZFS and a few VMs. Especially if he knows even a little bit about servers, he can go a lot of places with Solaris desktop with Zones running VBoxHeadless.

Main drawback for the OP (and it's not an issue for his hardware) is that Solaris doesn't run well off USB. A USB rpool sort of defeats the purpose of ZFS on bare metal anyway; moreover, rpool best practice is at least a ZFS mirror.
Then again, there's no inherent need to dedicate separate drives to rpool when you're only running one OS, and it's effortless to move file systems and zones as you attach new disks and create new vdevs and pools (`zoneadm -z <zone> move`).
(With the OP's disks, and assuming all else equal, the simple play is to mirror the 2x 1TB for rpool and /export/VBoxZones, and RAIDZ2 (or 2x RAIDZ) the other six drives for mass storage. Then I'd export iSCSI LUNs if I needed to, but I wouldn't need to. ;) )
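A sketch of that layout on Solaris/illumos (pool names and device names are placeholders; the iSCSI part uses COMSTAR and is only needed if you actually want to export LUNs):

```shell
# Mirror the two 1TB disks as a pool for VM zones
# (rpool itself is created by the installer)
zpool create vmpool mirror c2t0d0 c2t1d0

# RAIDZ2 across the six 2TB EARS drives for mass storage
zpool create tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0

# Optional: carve a zvol out of tank and export it as an iSCSI LUN
# via COMSTAR (enable the services first)
svcadm enable stmf svc:/network/iscsi/target
zfs create -V 100G tank/lun0
stmfadm create-lu /dev/zvol/rdsk/tank/lun0
stmfadm add-view <lu-guid>      # use the GUID printed by create-lu
itadm create-target
```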



The OP is probably running his Windows machines and remote clients as thick clients on bare-metal desktops now, if he's running file servers. No real change there, unless he wants one.


The advantages are that the OP's NAS/SAN will be running natively on bare metal, with no hypervisor deciding what gets passed through. He'll be running a very advanced network virtualization stack that lets him create VNICs and bridge multiple VBoxes on plain subnet addresses more easily than creating VLANs over hardware NICs. He'll have strong documentation. He'll have less need for his VBoxes once he gets his head around Zones. And everything will be on ZFS. Plus, you don't really need a NAS front end when you have a full G2 desktop you can RDP or VNC into.
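The networking stack referred to here is Crossbow; a minimal sketch of giving a VBox zone its own virtual NIC on a plain subnet address (the physical link name and IP address are examples, not the OP's actual values):

```shell
# Create a virtual NIC on top of a physical link (Crossbow)
dladm create-vnic -l e1000g0 vnic0

# Plumb it and assign a static address (Solaris 11 / illumos ipadm)
ipadm create-ip vnic0
ipadm create-addr -T static -a 192.168.1.50/24 vnic0/v4

# Verify
dladm show-vnic
ipadm show-addr vnic0/v4
```

The VNIC can then be delegated to a zone (or handed to a VBox bridged interface), so each VM gets its own MAC and address without touching switch VLAN configuration.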

The "Cloud in a Box" or "Solaris Mothership" strategy has some real strong points, and should be considered alongside the "All-In-One". They're both great, and they're both free for OP's uses.

:cool:
 
You can run Solaris off a USB stick:
http://smartos.org/
This Solaris distro seems to be wicked. Many of the Solaris kernel hackers who defected from Oracle are behind SmartOS: the DTrace team, ZFS people, etc. It seems SmartOS has the greatest pool of Solaris knowledge outside Oracle.
 