a NEW all-in-one storage and vm host option

Zedicus

[H]ard|Gawd
Joined
Nov 2, 2010
Messages
1,337
so i got run off of the freenas board for trying to share my experience, so i will log some of my results here.

this build is running and has been in production for over 6 months. it was not a simple initial config due to the lack of anyone really using a similar platform, but i would say i have the serious bugs worked out and now i have a simple to maintain, easy to use platform.

this is freenas on KVM (proxmox). to make it worse (according to the nay-sayers), i am also running on an AMD platform.

hardware:
chenbro RM414 4u 16bay rack case (by far the single most expensive item)
supermicro H8DGI-f MB
2x AMD 6128HE
48gb reg ecc ddr3
2x perc h310 (heavily modified, details later)
1x adaptec 5405 (not used for any ZFS)
various intel based NICs 1x10g fiber 4x gig-e
6x 3tb constellations in RAIDZ2, a single vdev for now, but i can add vdevs later since it is a pool (of 1 vdev for now). these are connected to the perc h310s

there are 100 or more possible ways to install a similar config, so take this as information to base your system on, not an 'install guide'.

due to the hardware i had lying around, i used the adaptec 5405 and 4x 300gb 15k cheetahs for a hardware raid 5 array. this array holds the proxmox install and the VM virtual hard drives. again, this was a custom config; proxmox could be installed to USB like a VMware install, or any number of other options.

the perc h310 cards function beautifully as passthrough devices even in IR mode. however, due to freenas being inside a VM, there were some 'gotchas':
http://pve.proxmox.com/wiki/Pci_passthrough
do NOT use the section on pci express passthrough; it is only relevant to beta kernels where video card extensions are being passed through. PCI passthrough works fine even when a pci express slot is used.
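for reference, the plain PCI passthrough from that wiki page boils down to a few steps. this is a sketch only; the VM id (100) and pci address (01:00.0) are examples, substitute your own:

```shell
# enable the IOMMU on an AMD board (intel_iommu=on for intel) by editing
# the kernel command line in /etc/default/grub, then run update-grub
# and reboot the host:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"

# load the vfio modules at boot by appending them to /etc/modules
echo -e "vfio\nvfio_iommu_type1\nvfio_pci" >> /etc/modules

# find the pci address of the perc h310 (it shows up as an LSI chip)
lspci | grep -i lsi

# hand the card to the freenas VM
qm set 100 -hostpci0 01:00.0
```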

after the passthrough is configured, you will notice the VM that now owns the perc h310 cards will fail to boot with a 'bios failed to load' error. there are instructions for loading the card's bios into a KVM file on the host, but this only caused me more headaches. the solution was simple: flash the card to IT mode and blank out (remove) the bios section in the process. there are instructions for this all over the web, here's one:
http://mywiredhouse.net/blog/flashing-dell-perc-h310-firmware/
depending on whether you have a bios or UEFI system, the process will be slightly different.
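the rough shape of the flash, as described in guides like the one linked above, looks something like this. treat the exact firmware file names and tools as examples from that guide, not gospel; get them from the guide you actually follow:

```shell
# from a DOS boot disk: wipe the dell raid firmware and SBR first
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0

# after a reboot, flash the LSI IT-mode firmware. note that SKIPPING the
# usual '-b mptsas2.rom' bios step is what leaves the card with no boot
# bios at all, which is exactly what the VM passthrough needs
sas2flsh -o -f 6GBPSAS.FW
sas2flsh -o -f 2118it.bin

# restore the sas address printed on the card's sticker
sas2flsh -o -sasadd 500xxxxxxxxxxxxx
```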

once i battled through that, the rest of the freenas install went off without a hitch. (set the virtual cpu to qemu64 mode also.) however, i then ran into terrible performance after the install was complete. there are some driver issues that cause performance degradation, but there are simple workarounds too.
the options are to either pass through one port of a real intel nic, from something with this chipset: http://h10010.www1.hp.com/wwpc/us/e...2466-64274-3724774-3724775-3724777.html?dnr=2
OR get the current VIRTIO driver installed in freenas; it probably is current now, as my install was from over 6 months ago. remember though, the VIRTIO driver is not magic. using it to connect to a junk onboard realtek nic will NOT yield even tolerable performance. i still recommend the aforementioned intel nic even if using the VIRTIO driver. the supermicro board has good intel based onboard nics; my config has 2 onboard, 2 on one of the HP intel nics, and 1 intel fiber nic, all in use for various things.
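either option is a one-liner on the proxmox side. VM id and pci address are examples again:

```shell
# option 1: give the freenas VM a virtio nic bridged onto vmbr0;
# freenas sees it as a vtnet device once the driver is in place
qm set 100 -net0 virtio,bridge=vmbr0

# option 2: skip virtualized networking entirely and pass through one
# port of an intel nic, same mechanism as the hba passthrough
qm set 100 -hostpci1 02:00.0
```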

for those who would like to get very fancy, proxmox also supports openVswitch. i have not dove into that config; at this point i am just passing all of my traffic to a quanta LB4M and letting it sort out who gets what. ALSO, the newest proxmox version fully supports ZFS, so there are even more custom install configurations available now.
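for context, the pool layout i described above looks roughly like this at the command line, whether you build it in freenas or on proxmox's native ZFS. device names and the pool name 'tank' are placeholders; in practice use /dev/disk/by-id paths so the pool survives drives being reshuffled:

```shell
# create a single raidz2 vdev from six drives
zpool create -o ashift=12 tank raidz2 sda sdb sdc sdd sde sdf

# growing the pool later means adding a whole new vdev; zfs stripes
# across vdevs, you cannot add single disks to an existing raidz2
zpool add tank raidz2 sdg sdh sdi sdj sdk sdl

# confirm the layout
zpool status tank
```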


now why did i do this? especially since my day job is VMware management and i have access to a vsphere system i could have piggybacked a home install onto as a 'testing' environment. well, the ease of use of proxmox plus the ease of use of freenas made a web managed VM system with n00b friendly ZFS compelling. plus i work with server gear all day, so troubleshooting the performance issues and designing a feature rich build were not challenges. and these systems are all open platform, so getting help for issues is simple. (just don't mention using KVM on the freenas support forum)
 
this is some info on my proxmox utilization. i have 10 virtual servers: a samba server acting as a domain controller, a railo CF server with MySQL, a zoneminder network security camera system, a windows 7 desktop that i use with windows RSAT to manage the open source domain controller, the freenas system, a couple of desktop linux installs for testing and management, and an XBMC VM with its gui running (it only gets about 15fps, but i use it to test plugins and folder structure, so it works well). out of all 10 VMs the windows desktop by far uses the most resources, and still i am not taxing this install.

i have 6 HTPCs in my house, not counting the virtual 1. plus i am using active directory, group policy, mapped drives, and an enterprise domain system, and i have never loaded the system down enough for an htpc to drop frames.
 
just updating this. i have not only been running this system in my home since the initial post date, i have actually deployed a few of these in local businesses, INCLUDING a linux VM with SAMBA4 for full active directory support. the I.T. admins administer the active directory install with windows RSAT utilities. spinning up servers is easy, and proxmox has NATIVE ZFS built in now, so it is not even necessary to do the VM with FreeNAS. (i am still using my VM with freenas just because i have been too lazy to change it)

anyone still going through the hassle of free ESXi with any sort of storage layered on top is killing about 10x more time than needed.

also, i have moved the proxmox install a couple times, and i am going to move it again soon. my 5405 is serious overkill for what it is doing here, but overkill IS the best kind of kill. proxmox could be installed on a USB thumb drive if you stored the VMs on something else.
 
some utilization shots on a light morning. yes, i know my backup array is OVER utilized. it is just a backup of the raidz2, so it's not a big concern, but i am due to expand storage in the near future. gotta say, i am a VMware and storage admin for work, and if we didn't have that stuff in place when i started, this is the system i would be running. you will note i am also on fairly old versions; again, with a year of uptime, and considering i do I.T. support all day long, keeping this running system on the bleeding edge has not been a priority.

funny thing is, i do have a licensed install of server2012, but i do not use it for any domain functionality. DebDC is the samba4 domain controller.

this entire system also gets used as a testing environment for the other various places i do I.T. work for, so i have lots of test operating systems and a 2012 server joined to the domain. i can test anything from how windows shares react to group policy to binding linux to a domain. this is more than most people's 'home lab' environment and actually a lot more functional than most 'small business' environments.

[attached screenshots: 1.JPG, 2.JPG, 3.JPG]
 
as i sit here fixing the lamest VMware issue i have possibly ever seen at work, i figured i would bump this in the hopes of saving even one poor soul from having to deal with the crapfest that is VMware.
 
vmware, to get a web interface with anything like the usefulness of Proxmox, requires a separate software install. you might have heard of it; publicly it is known as vCenter. this thing (the only nice way i could put it) has a couple options for install, one being a not-terrible VM appliance that is quick to set up and relatively reliable. so of course that is not how it was implemented here. (i inherited this system, it is not one i built)

vcenter here is installed on server 2008, and over the course of a couple vmware upgrades they have gone from SQL 2005 to SQL 2008 to SQL 2008 R2, and, best part of all, now they have it installed with the built in mini SQL. i tried logging in to the VMware consoles yesterday and could get nowhere: local accounts, domain accounts, web console, vmware client install, none of them would talk to the vcenter server. on the bright side all of the VM servers were still up and i could connect VIA RDP, so i went home for the night. i had a fair idea that it was only the vcenter interface with issues (i could connect to each host independently), and judging by the mess of SQL on the vcenter server i figured something was wrong there.

this morning i got back to seeing what exactly was wrong, and sure enough the mini SQL database was full and configured to not grow as needed. (we won't even go into all the old databases still sitting out there from the previous versions, i guess.) i purged a couple of gigs of logs out of the vcenter database, shrunk the amount of stored logs, and tried to enable database growth (doesn't look like the mini SQL database supports that, even though it says it's turned on), and the vcenter system is now functional again. i debated shutting the windows server off and installing the VM appliance, but it is running again now so i probably won't.

now, i would like to blame the contractor that built this out to begin with. why are there 4 SQL installs? why didn't they use the appliance? why didn't they do even basic alterations to keep the log file from filling up the small configuration of the DB? but if you think about it, really, why is VMware even allowing it to be configured in such a convoluted way to begin with? i probably will never know; the support staff still here from when the install was done have never logged into it, and everyone that used the system has since moved on to other employment. this is the fourth VMware environment i have been handed as i have changed employment over the years, and the sad part is it is not the most wonky one i have ever seen.
 
apparently at 357 days the freenas logs fill up, causing the WebUI to not open completely. a simple manual wipe of the logs fixed things right up. the logs SEEM to be set up to truncate on reboot as needed, however who reboots servers in the *NIX world? also, there was no effect on the actual file share access or anything like that; due to the drive structure it simply filled up the drive that holds the logging and http server. (i might move the logging; having it on the same area as the admin utility seems odd, but better than being on part of the ZFS system, i guess.)
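the manual cleanup is nothing fancy; a sketch of the sort of thing i mean (exact log file names will vary by freenas version, the ones below are the usual suspects):

```shell
# find out which filesystem actually filled up -- the system area with
# the logs is separate from the storage pool
df -h /var

# see which logs are eating the space
du -sk /var/log/* | sort -n | tail

# truncate rather than delete, so running daemons keep valid file
# handles instead of writing into an unlinked file
: > /var/log/messages
: > /var/log/samba4/log.smbd
```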
 
Nice !

Though this is way too much functionality for what I need, it is interesting to see what can be done. I never worked with KVM, as I have used VMware ever since it came out, and I do admit I like it more than I like Hyper-V. But I also do not work as a sole VMware admin as you do; I have to do it all, and the depth I can dive into each one is limited.

Endless possibilities and so little time....
 

i guess i never specified that VMware admin is only part of it; out here in the sticks we do not get just one hat. i am also the AS/400 guy, the Active Directory guy, and sometimes the network guy... oh, and as of about 3 months ago i am also the I.T. Director. along with that, i fairly often get requests to assist other businesses, as there is not a lot out here in the way of I.T. so i assembled a modular system with options for anyone. don't need active directory? leave the DC out. already have a NAS? leave FreeNAS out. proxmox allows more features for free and does have support if the business feels they need to buy a support contract (at about 10% of a VMware contract).

i work in one of the largest agencies out here; we even support 5 remote locations that are on our WAN, about a 2 hour drive to the farthest. we have more departments and see more of the public on a day-to-day basis, and have the smallest I.T. staff. efficiency is a requirement. this place has enough stuff tied directly to VMware that it will not be moved soon, but i am working to install a Proxmox box to help spread the load and maybe start a slow migration.
 

Yea, I really don't see the benefit to running the vCenter install on a Windows OS. WHY do this? Ease of troubleshooting, I suppose. Fear of the command line, maybe? We've gone with the appliance setup for our own little 5-host setups. They run well...

Though I am having one issue with the Update Manager not registering on one of my vCenters. You haven't seen that, have you? Just curious.
 
no, but when i have used the appliance it is easy to replace and see if that is the issue. just shut down the one with the issue, spin up another, and see if it works. if it does, delete the old one.
 
the vcenter appliance was (is) not capable of patching the vmware hosts on the fly. the windows vcenter installation is the only one that could patch ESXi v5.0 and v5.1 hosts. also, there are some vcenter integration plugins that need a windows host to integrate into vcenter. (Netapp)
 
I'm running ESXi 6.x and I haven't seen anything stating that limitation in my limited purview.
 
actually, i do know WHY it was originally installed on a windows host: the DR site replication system did not function on the appliance (a long time ago), but it has been available for a while now, even in the appliance.
there are still some plugins that will only function with vcenter on a windows host, but thankfully that list is VERY limited now.
 
I cannot understand why you would want to run FreeNAS on Proxmox. Proxmox can use ZFS natively so why bother with the FreeNAS?
 

when i started using proxmox it had no ZFS support. as in NONE, not even in the experimental branch. as such, my system is designed around adding ZFS with as much user friendliness as possible. also, if you look at how ZFS is implemented on proxmox, it is designed to hand ZFS space to the host. so moving my ZFS drives over to proxmox would not be straightforward, as my ZFS system houses media and user shared drives, and it is all set up to be accessed directly via things like CIFS and NFS.

considering the HOW, freenas on proxmox is still a valid option, but it depends on what you want to do with the ZFS. it would be just as easy now to have the ZFS at the root and simply spin up a windows or linux server to do all the sharing of data from that. for ease of client accessible ZFS, not much beats FreeNAS. (there is OmniOS of course, but the freenas forum told me virtualising freenas could not be done, and i like to prove people wrong.) remember, the original post was 2 years ago, and by the time i made the post i had been running the system for a while. i would say this system has been in place 4 years and has had VERY minimal changes and/or reconfiguration.
 
if someone is planning a brand new build, try the 5.0 beta. it has some very advanced new features. it takes some major work to pull off an upgrade from 4.4, though, so i recommend it for new builds, not upgrades, at this time.
 
adding this here, as it can be used as AD authentication for your home network. so even if you use the VMware and _Gea all-in-one solution, you could still use this AD DC as a single sign on solution for all the authentication stuff.

https://hardforum.com/threads/buidling-a-working-debian-ad-dc.1956959/

this is an updated version of my own DC that i have been running since the initial post. (and i have deployed this system with the DC to several sites, and even have some windows admins managing the systems successfully)
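the heart of that DC build is samba's own provisioning tool; the full dns and kerberos setup is in the linked thread. a minimal sketch, with the realm, domain, user name, and passwords all being examples you would replace:

```shell
# provision a new AD domain on a debian VM
samba-tool domain provision \
    --use-rfc2307 \
    --realm=HOME.LAN \
    --domain=HOME \
    --server-role=dc \
    --dns-backend=SAMBA_INTERNAL \
    --adminpass='ExamplePassw0rd!'

# afterwards, windows RSAT tools can point at this DC just like a
# microsoft one; day-to-day admin also works from the shell
samba-tool user create someuser 'ExamplePassw0rd!'
samba-tool user list
```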
 
Never saw this thread, but as a newbie to this stuff I ended up giving both ESXi and Proxmox a try as a VM host and storage. Can confirm that Proxmox is the way to go for a home user who doesn't need to learn VMware for work. I chose it for Debian under the hood and native ZFS for the host OS and VM datastore. I didn't bother with a FreeNAS VM and just created my storage pool directly in Proxmox. Grabbed Samba with apt-get and it's working beautifully to host files for my Windows machine. I max out my gigabit network at 113MB/s sustained -- next is figuring out LAN teaming on both ends.

Also using a pfSense VM on the same machine as my router, and it has been ridiculously stable so far.
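For anyone wanting to replicate the Samba-on-the-host setup described above, it is only a few steps on a Proxmox (Debian) host. A sketch, assuming a pool named 'tank' and with the share path and user name as placeholder examples:

```shell
# carve a dataset out of the pool for file shares
zfs create tank/share

# install samba on the proxmox host
apt-get install -y samba

# append a minimal share block to /etc/samba/smb.conf
cat >> /etc/samba/smb.conf <<'EOF'
[share]
   path = /tank/share
   read only = no
   valid users = myuser
EOF

# give the unix user a samba password, then restart the daemon
smbpasswd -a myuser
systemctl restart smbd
```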
 