Zedicus · [H]ard|Gawd · Joined Nov 2, 2010 · Messages: 1,337
So, I got run off of the FreeNAS board for trying to share my experience, so I will log some of my results here.
This build is running and has been in production for over six months. The initial config was not simple, since nobody else seemed to be running a similar platform, but I would say I have the serious bugs worked out, and I now have a platform that is simple to maintain and easy to use.
This is FreeNAS on KVM (Proxmox), and to make it worse (according to the naysayers), I am also running on an AMD platform.
Hardware:
- Chenbro RM414 4U 16-bay rack case (by far the single most expensive item)
- Supermicro H8DGi-F motherboard
- 2x AMD Opteron 6128 HE
- 48GB registered ECC DDR3
- 2x Dell PERC H310 (heavily modified, details later)
- 1x Adaptec 5405 (not used for any ZFS)
- various Intel-based NICs: 1x 10Gb fiber, 4x gigabit
- 6x 3TB Seagate Constellations in a single RAIDZ2 vdev, connected to the PERC H310s; since it is a pool (of one vdev for now), I can add vdevs later
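Since the pool is a single vdev, growing it later is just a matter of striping in another vdev. A minimal sketch of what that looks like from the shell (the pool name "tank" and the da* device names are hypothetical; in FreeNAS you would normally do this from the GUI, and adding a vdev is permanent):

```shell
# pool was created as one 6-disk RAIDZ2 vdev, e.g.:
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# later, capacity can be grown by adding a second RAIDZ2 vdev;
# the pool then stripes across both vdevs
zpool add tank raidz2 da6 da7 da8 da9 da10 da11

# confirm the layout
zpool status tank
```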
There are a hundred or more possible ways to install a similar config, so take this as information to base your system on, not an 'install guide'.
Due to the hardware I had lying around, I used the Adaptec 5405 and 4x 300GB 15K Cheetahs for a hardware RAID 5 array. This array holds the Proxmox install and the VM virtual hard drives. Again, this was a custom config; Proxmox could also be installed to USB like a VMware install, or any number of other options.
The PERC H310 cards function beautifully as passthrough devices, even in IR mode. However, because FreeNAS is running inside a VM, there were some gotchas:
http://pve.proxmox.com/wiki/Pci_passthrough
Do NOT use the section on PCI Express passthrough; it is only relevant to beta kernels where video card extensions are being passed through. Plain PCI passthrough works fine even when a PCI Express slot is used.
After the passthrough is configured, you will notice that the VM that now owns the PERC H310 cards fails to boot with a 'bios failed to load' error. There are instructions for loading the card's BIOS into a KVM virtual machine file on the host, but that only caused me more headaches. The solution was simple: flash the card to IT mode and blank out (remove) the BIOS section in the process. There are instructions for this all over the web; here's one:
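For reference, the passthrough setup itself boils down to a few standard Proxmox steps. A sketch, assuming an AMD host, VM ID 101, and a card at PCI address 01:00.0 (both placeholders; check yours with lspci):

```shell
# /etc/default/grub -- turn on the AMD IOMMU on the host,
# then run update-grub and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on"

# /etc/modules -- load the vfio modules at boot
vfio
vfio_iommu_type1
vfio_pci

# find the PCI address of the H310
lspci | grep -i lsi

# hand the card to the VM; equivalently, add
# "hostpci0: 01:00.0" to /etc/pve/qemu-server/101.conf
qm set 101 -hostpci0 01:00.0
```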
http://mywiredhouse.net/blog/flashing-dell-perc-h310-firmware/
Depending on whether your system boots via BIOS or UEFI, the process will be slightly different.
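The exact commands depend on the firmware package you download, but as a rough sketch of the usual DOS-boot sequence (the file names here are the ones commonly shipped in the LSI/Dell packages and may differ in yours; the SAS address is printed on the card's sticker):

```shell
# wipe the existing Dell firmware (DOS, megarec.exe);
# back up the SBR first
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0

# reboot, then flash the LSI IT-mode firmware.
# note: no "-b mptsas2.rom" argument is given, so no option ROM
# (BIOS) gets written -- that is what lets the card come up
# cleanly inside the KVM guest
sas2flsh -o -f 6GBPSAS.FW
sas2flsh -o -sasadd 500605bXXXXXXXXX
```

On a UEFI-only board the same steps are done with the EFI-shell version of the flasher instead.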
Once I battled through that, the rest of the FreeNAS install went off without a hitch. (Set the virtual CPU to qemu64 mode as well.) However, I then ran into terrible performance after the install was complete. There are some driver issues that cause performance degradation, but there are simple workarounds too.
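The CPU type can be set from the web UI or the CLI; assuming VM ID 101 (a placeholder):

```shell
# present a generic qemu64 CPU to the FreeNAS guest
qm set 101 --cpu qemu64
```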
The options are to either pass through one port of a real Intel NIC, something with this chipset: http://h10010.www1.hp.com/wwpc/us/e...2466-64274-3724774-3724775-3724777.html?dnr=2
OR get the current VirtIO driver installed in FreeNAS; it is probably current now, as my install was from over six months ago. Remember, though, the VirtIO driver is not magic. Using it to connect to a junk onboard Realtek NIC will NOT yield even tolerable performance. I still recommend the aforementioned Intel NIC even when using the VirtIO driver. The Supermicro board has good Intel-based onboard NICs; my config has two onboard ports, two on one of the HP Intel NICs, and one Intel fiber NIC, all in use for various things.
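Going the VirtIO route is a one-liner from the Proxmox CLI (VM ID 101 and bridge vmbr0 are placeholders; the bridge should sit on one of the Intel ports):

```shell
# attach a paravirtualized VirtIO NIC to the guest,
# bridged onto the host's vmbr0
qm set 101 --net0 virtio,bridge=vmbr0
```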
For those who would like to get very fancy, Proxmox also supports Open vSwitch. I have not dug into that config; at this point I am just passing all of my traffic to a Quanta LB4M and letting it sort out who gets what. ALSO, the newest Proxmox version now fully supports ZFS, so there are even more custom install configurations available.
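For anyone who does want to try Open vSwitch, a minimal sketch of what the bridge looks like in the host's network config (requires the openvswitch-switch package; the bridge and interface names are placeholders):

```shell
# /etc/network/interfaces -- OVS bridge with one physical uplink
auto vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports eth2

auto eth2
iface eth2 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1
```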
Now, why did I do this? Especially since my day job is VMware management, and I have access to a vSphere system I could have piggybacked a home install onto as a 'testing' environment. Well, the ease of use of Proxmox and of FreeNAS made a web-managed VM system plus noob-friendly ZFS compelling. Plus, I work with server gear all day, so troubleshooting the performance issues and designing a feature-rich build were not real challenges. And these systems are all open platforms, so getting help with issues is simple. (Just don't mention using KVM on the FreeNAS support forum.)