Another ESXi storage server! (ZFS + WHS)

evt

Here's a quick rundown:
Xeon X3440, 4GB ECC (for now), 8x 2TB LP Seagates (the new 4K-sector drives with 64MB cache), Supermicro X8SIA-F, 4x BR10i's

Being new to ESXi, I had to fool around with it until I got it to do what I want. For now, I want to migrate my data from the old WHS machine to the new virtualized WHS machine.

I passed through the integrated Intel SATA controller for dedicated WHS VM use, and the installation seemed to proceed without a hitch, until I realized that the VM doesn't actually BOOT off any drives on the passed-through Intel controller (although I was surprised that it didn't greet me with the usual Intel Matrix BIOS screen in the VM, the setup still found the drive attached to it).

Is there a way to force the VM to boot off the passed-through controller? I really do not want to use a virtual disk for booting; that would be an extreme disappointment, because I wouldn't be able to pull the drive out and natively read the data off it if something bad happens.
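
One workaround I keep seeing mentioned (not something I've verified on this box yet) is to give up on booting from the passed-through controller and instead leave the boot disk on a controller ESXi still owns, handing it to the VM as a raw device mapping - the guest then writes straight to the physical disk, so it stays readable if I ever have to pull it. Roughly, from the ESXi console (the device ID and datastore path below are just placeholders, and local SATA RDMs are unsupported even though they commonly work):

Code:
# find the local disk's device identifier
ls /vmfs/devices/disks/

# create a physical-compatibility RDM pointer file on an existing datastore
vmkfstools -z /vmfs/devices/disks/t10.ATA_____PLACEHOLDER_DISK_ID /vmfs/volumes/datastore1/whs/whs_boot_rdm.vmdk

# then add whs_boot_rdm.vmdk to the WHS VM as an existing disk and boot from it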

Here are some pictures to stare at:
serverbuild1.jpg

serverbuild2.jpg

serverbuild3.jpg
 
I am a little confused about what you want to achieve with your config.

Some thoughts:

Usually you virtualize an OS to make it hardware-independent, to have snapshots you can roll back to, and to be able to move it easily to another machine in case of problems.

Pass-through is a way to give an already-virtualized machine full hardware access (mostly USB, storage controllers and NICs). It is needed especially if you want to virtualize a SAN OS with direct-attached, natively formatted, high-speed storage (e.g. a ZFS SAN).

If you want to explore a virtualized VM, you can always attach its virtual disk to another installation.

With ESXi, you must virtualize WHS like any other OS. You can either store these VMs on a local datastore (uncomfortable, slow access, only a few ultra-slow snapshots possible) or on a specialized SAN storage system - either on a second machine or virtualized on the same ESXi server, like I do with my ZFS All-In-One concept.
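
The basic loop-back looks roughly like this (pool, dataset, IP and datastore names are only placeholders), done once inside the storage VM and once on the ESXi console:

Code:
# inside the ZFS storage VM (OpenIndiana / Solaris Express):
zfs create tank/vmstore            # dataset that will hold the ESXi guests
zfs set sharenfs=on tank/vmstore   # export it over NFS (ESXi needs root access; exact share options depend on your network)

# on the ESXi console, mount that export as a datastore:
esxcfg-nas -a -o 192.168.1.10 -s /tank/vmstore zfs_vmstore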

I would not build an All-In-One with ESXi and WHS as the SAN storage delivering embedded storage back to ESXi via NFS or iSCSI
(too big, too slow, no comfortable snapshots, not the data security of ZFS, too many "needed security updates").

->

1. Do not use ESXi; use WHS + something like VMware Workstation, or

2. Use ESXi + a local datastore only, or
use ESXi + a specialized SAN OS (on a second machine, or embedded/virtualized)

Use the first option if you mainly need WHS.
Use the second option if all VMs are equally important (use 8 GB RAM, better 16 GB).
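
The "comfortable snapshots" point is just this kind of thing on the storage VM (dataset and snapshot names are placeholders):

Code:
zfs snapshot tank/vmstore@before-update   # instant, consumes no space until data changes
zfs list -t snapshot                      # list existing snapshots
zfs rollback tank/vmstore@before-update   # roll the whole VM dataset back to that point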

Gea
 
I think you need to explain what you're trying to accomplish... you list ZFS and WHS in the title, but no details about how you want to implement ZFS or what/how you want to use WHS...

If you want to use ZFS as storage for ESXi, then that's a completely different setup from just using a virtualized ZFS as a storage server...

You need someplace to store the guest VM files... either a local store or a network store using iSCSI or NFS. You can't boot directly off a hard drive... you have to create a virtual hard drive and store your OS on that. Or you can boot off a SAN if you have that set up...
If you're converting your local WHS install to a VM, then you would pass your controller/hard drives through and use your WHS VM to access them directly. No ZFS involved...
You would use a local store to hold the VM files, so you would install a new instance of WHS to the VM, then pass your drives through for it to pick up. Part of virtualizing is reducing the impact when something bad happens: if the server dies, you can import the files to another VM host with the OS unaffected. No OS rebuild required, and you can start the server up right where it left off...
Since ESXi is free, you can easily have it set up on another USB drive and be up and running on a compatible machine in minutes...
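
For example (paths are placeholders), once the new host can see the old datastore, re-registering a guest is one command from the ESXi console, or "Add to Inventory" when browsing the datastore in the vSphere client:

Code:
# point the new ESXi host at the existing VM's config file
vim-cmd solo/registervm /vmfs/volumes/datastore1/whs/whs.vmx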

And a local datastore is fine for storing files... I don't know what Gea means by uncomfortable and slow, but it's a step easier than using a NAS and doesn't require extra setup. The internal SATA drives can be faster than your gigabit network, though that depends on your setup, and it's still easier for someone who's learning about ESXi.

But clarify what you want to do so we can give better advice.
 
I am a little confused about what you want to achieve with your config.

I think you need to explain what you're trying to accomplish... you list ZFS and WHS in the title, but no details about how you want to implement ZFS or what/how you want to use WHS...

But clarify what you want to do so we can give better advice.


Sorry if the initial post sounded dry; the browser crashed while I was typing up the thread, and I half-assed the explanation.

Some additional specs:
Norco 4020
Antec Trupower-Trio 430w
The bracket-less BR10i's are still on IR-mode firmware; I haven't flashed them yet.
Seagate ST2000DL003 (all 8 drives went through testing without a problem, although some of the SMART values went up/down)


Initially I wanted to build an all-in-one server: pfSense, Untangle, ZFS, and WHS (or Vail), so I assumed ESXi was the way to go. This would replace my existing router and provide extra redundant storage for my family and friends. (Extra redundant because I am still a bit wary of those Seagates and/or the HBA failing on me, which is why I wanted to build ZFS + WHS in the first place: to re-duplicate my data and see what happens in the long run.) I will still be using WHS because of its file-sharing simplicity for the family, as well as its automated computer backups.

I wanted to virtualize WHS because my old WHS machine is a bit flaky in terms of hardware and software stability (as well as power-inefficient), and this new server would give me more processing power with less electricity usage. The WHS VM will only use the Intel 6-port SATA controller via passthrough, plus maybe a minimum-sized virtual system disk to minimize the amount of real data that goes into the .vmdk file. The WHS drives will consist of my spare 2x 1.5TB Seagates plus some other odd drives I have left.

For ZFS, I am torn between sub.mesa's ZFSguru and _Gea's napp-it + OpenIndiana, although in theory I can try both and see which is more comfortable; I would need to greatly increase the amount of RAM in the server, but that can wait. The ZFS VM will again use a minimum-sized virtual system disk and then get 2 of the BR10i's as passthrough. The ZFS array will be RAIDZ2 with 6x 2TB disks, with the other 2x 2TB disks as cold spares; 4 drives on one BR10i and the other 4 drives on the other BR10i, just for giggles.
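
In command-line terms, the pool I have in mind would boil down to something like this (the cXtYdZ device names below are placeholders; the real ones will depend on how the two BR10i's enumerate under OpenIndiana):

Code:
# 6-disk raidz2, three disks hanging off each BR10i
zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c3t0d0 c3t1d0 c3t2d0

# a true cold spare just sits on the shelf; if I ever wanted a hot spare instead:
zpool add tank spare c3t3d0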
 
I am confused!

What exactly do you feel WHS will solve that a ZFS-based OS won't?

Your setup is strange :) Please enlighten us!
 
Just a few suggestions:

1. Don't virtualize your router. I think you're setting yourself up for a real problem. Just my opinion, though. At least don't end up in a situation where routine maintenance on this machine means there is no internet in the house... Look at http://www.pcengines.ch/ for alternatives :)

2. WHS is a PAIN to virtualize, from what little experience I have with it (I installed it just once on my ESXi server and it took hours). I think the whole thing is a piece of crap - but it's even worse when virtualized (driver issues). An average Solaris Express + napp-it install is WAY easier and should cover most of what WHS does.

3. Windows 7 has backup built in. Do you actually have Windows clients that aren't running Win7?
 
The ZFS array will be RAIDZ2 with 6x 2TB disks, with the other 2x 2TB disks as cold spares; 4 drives on one BR10i and the other 4 drives on the other BR10i, just for giggles.

Sorry! I can't help myself and need to append one more comment... I haven't seen many people here using cold spares, and two of them on top of a RAIDZ2 probably isn't the best use of your total resources. I would consider a three-way mirror setup using 9 disks if you're looking for high redundancy and speed - or just use 10 drives in a RAIDZ2 if you want high capacity (16TB) and still a lot of protection. The difference in real risk between a 6-disk and a 10-disk RAIDZ2 isn't as large as you might think.
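
To make the two layouts concrete (placeholder device names, assuming 2TB drives):

Code:
# 9 disks as three 3-way mirrors: ~6TB usable, two failures tolerated per mirror, fast
zpool create tank \
  mirror c2t0d0 c2t1d0 c2t2d0 \
  mirror c2t3d0 c3t0d0 c3t1d0 \
  mirror c3t2d0 c3t3d0 c4t0d0

# 10 disks in a single raidz2: ~16TB usable, any two disks in the vdev can fail
zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c4t0d0 c4t1d0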
 
WHS works fine if you pass through the controllers and the drives. Then all the drives are managed and set up from WHS, so you can pull a drive and stick it in any Windows computer to pull data off. And I always have cold spares for my production, important stuff...

You're trying to do too much... essentially you're trying to do some testing and figure out what works for you, while also trying to run a "production" WHS for sharing files with the family...

So list your requirements - what you must have, what you want, and ultimately what you want the end result to be. You can virtualize Untangle and pfSense, but you have to logically and physically figure out how to separate that and how you want your network to look and function. Also remember: if you take that one server down, everything goes down at once... can you live with that?

You should probably figure out what you want from ZFS and WHS and how to implement that in your network. They do different things, but offer some similar features...

First off, 4GB is too little for everything you want to do... even if you give each VM 1GB, ESXi still needs some for overhead... I suppose you could do 512MB for Untangle and pfSense, but you're still short on memory, and if you want to do ZFS, you'll want lots more RAM...

You can google around and check out other people's setups for a virtualized WHS... a small virtual disk for the OS is all you need if you pass the drives through...
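
If it helps, creating that small OS disk thin-provisioned is a one-liner on the ESXi console (size and path are only examples - check what WHS actually requires as a minimum system drive):

Code:
# thin-provisioned boot disk; only consumes datastore space as WHS writes to it
vmkfstools -c 80G -d thin /vmfs/volumes/datastore1/whs/whs_os.vmdk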
 
After a long hiatus, I am resuming this server build. I stopped a while ago due to school and work, but mainly because I could not get the damn things to work properly.

I have read all the very useful replies in this thread and decided to keep it simple for now: ESXi with OpenSolaris + napp-it, and maybe WHS down the line if I really feel bored. I now have a 4-port PCI SATA RAID controller that will be used for the ESXi datastore drives, leaving the Intel SATA controller available for passthrough (for WHS, I suppose).

The main problem is that my BR10i's don't feel like detecting all the drives they are connected to. Restarting the server or doing a power cycle will make them detect the other drives, but then they forget the drives they saw before, regardless of the timeout values I set in the BR10i firmware. Is this a hardware incompatibility, or do I really need to flash them to IT-mode?

I don't believe the hard drives themselves are failing, since once in a blue moon the controllers do detect all eight of my 2TB Seagates. I ran a preclear on them and they passed with a clean bill of health (no UltraDMA CRC errors or the like).

The other exotic causes I can think of are that my SFF-8087 forward breakout cables are really flaky (eBayed, and of EXTREMELY thin build), that the miniSAS connectors on the BR10i's are not fully functional, that the missing brackets leave the controllers poorly seated(?), or that the Norco 4020 SATA backplanes are giving out. (Are the miniSAS backplanes from similar Norco chassis compatible with the 4020?)
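
My plan for narrowing down cables vs. controllers vs. backplane is to swap one cable/port at a time and compare what the OpenIndiana VM actually sees after each change, along these lines (this assumes the BR10i's are already passed through to that VM):

Code:
# list the disks the passed-through controllers see right now, then exit
format </dev/null

# per-device model, serial and error counters - transport errors point at cables/backplane
iostat -En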
 