ESX as a file server

Greetings! Hoping to get some opinions here regarding a planned NAS project.

I am currently running a Core2Quad box, with Win2008 as the host OS. The box has two RAID1 arrays (ICH10) and serves as my file server. This box also runs VMWare Server w/ 4 VMs for a little home lab environment (domain controllers, etc... nothing intensive).

The problem is, I am stuck from a file storage standpoint because the motherboard's SATA ports are filled. It is also very inconvenient and painful to have to reboot the host OS, as it interrupts any streaming media and takes down all the VMs.

I have considered building a separate box *just* as a file server. However, I would like to avoid a second server sitting in the basement. As part of some brainstorming, a consideration was to purchase an ESX-supported RAID controller, a larger case, and a bunch of 2TB drives, then ditch the Win2008 host OS and go with VMware ESX. I could then create one RAID array for the VMs and a second large RAID 5 or RAID 6 array that would serve as file storage. I could then configure that array as a disk in ESX and let a VM do the file serving.
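
As a rough illustration of the capacity side of that plan, here's a quick back-of-the-envelope sketch in Python (the six-drive count is just an assumption for the example; formatting and filesystem overhead are ignored):

```python
# Rough usable-capacity estimate for the proposed storage array.
# Assumption: six 2TB drives; RAID 5 gives up one drive to parity,
# RAID 6 gives up two. Real formatted capacity will be a bit lower.

def usable_tb(drives: int, drive_tb: float, parity_drives: int) -> float:
    """Usable capacity of a parity RAID array, ignoring filesystem overhead."""
    return (drives - parity_drives) * drive_tb

drives, drive_tb = 6, 2.0          # hypothetical: six 2TB disks
raid5 = usable_tb(drives, drive_tb, parity_drives=1)
raid6 = usable_tb(drives, drive_tb, parity_drives=2)
print(f"RAID 5: {raid5:.0f} TB usable, RAID 6: {raid6:.0f} TB usable")
# -> RAID 5: 10 TB usable, RAID 6: 8 TB usable
```

(RAID 6 gives up one more drive of capacity than RAID 5, but it can survive a second drive failure during a rebuild, which matters with 2TB disks.)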

Would this be a viable file server option, or would I be looking at major performance/IO issues doing this through VMs? I can always go the route of a second box and do Openfiler or something, but again, 1 box doubling as ESX and file server would be great.

Thanks in advance!
 
Basically, you want to virtualize your NAS. I can only say I recommend against it, but then I'm not that at home with virtualization. It would mean you tune everything to suit the needs of VMware, which tends to be very picky. FreeBSD also doesn't seem to play nice with virtualization; you may get very poor performance. What will you be running as the NAS OS anyway?

The advantage of this is that you only need one box. The alternative would be a dedicated NAS that runs your other services in VMs or jails on top; the same idea, just the other way around. That means no performance issues, no conforming to ESX demands, and generally the most problem-free setup.

You should decide on a NAS OS first; that will also decide whether you need a RAID controller or an HBA like the SuperMicro USAS-L8i, which packs 8x SATA/300 ports into two mini-SAS connectors.
 
While I have dabbled a bit with Linux and Openfiler, I am a lot more comfortable with Windows when it comes to file servers. I would probably stick with Windows 2008 or perhaps check out Windows Home Server v2 when it comes out.
 
If you are running 2008, why are you using VMware Server?
Hyper-V is much better than VMware Server, IMO.

Hosting a NAS inside of a VMware VM is inadvisable because you cannot do direct disk access using Server/Player/Workstation unless you are on 2003/XP.
With ESX you also can't do direct disk access, and you have to format the storage as VMFS.

I would advise against the SuperMicro USAS-L8i because it is designed for a proprietary UIO expansion slot that is only found on Supermicro systems, so it will not mount properly in a standard case.

You could get something like a Supermicro AOC-SASLP-MV8, which is essentially the same thing but uses a standard PCI Express slot.
 
The MV8 uses a different chip and is quite buggy; it doesn't work well on FreeBSD, at least. It also has half the bandwidth of the USAS card while being only slightly cheaper.

The L8i is supported in both FreeBSD and OpenSolaris, making it the best choice of controller for use with ZFS. Since it only has internal connectors, the slightly different bracket shouldn't pose any problems at all; loosen the screw holding the bracket and you can insert the card right into a PCI Express x8 slot.

I recommend choosing a NAS OS first, before choosing the hardware. The other way around could mean you end up with something that doesn't work well with the software you've chosen.
 
But he already said he doesn't want to use *nix and that he will use Windows.

Plus, the card is in no way limited by the PCIe x4 bus. You will not be pushing 1 GB/s with any setup unless he is using all SSDs, and even then SATA will hold him back long before the x4 bus will.
 
In his original post he also mentioned 'something like OpenFiler' as an alternative to the VM solution. If that's the direction he goes, the USAS controller is highly recommended.

If it's Windows-only I would still check whether the MV8 has any serious bugs that spoil your fun; all I hear about this controller is problems.

And 1 GB/s across 8 ports is not that hard to reach; even 5400 RPM disks would be able to bottleneck the PCI Express interface, though only on sequential transfers at the outer tracks, and it's likely a non-issue anyway since everything goes over gigabit. Still, I feel the USAS controller has several advantages over the MV8. I also prefer the heatsink to be on the 'up' side; heat wants to travel up, not down. Actually, I feel UIO is the correct standard and normal slots are flawed by design, since all the heatsinks end up facing down; that's why some graphics cards use all kinds of heatpipes to get the heatsink back on the up side.
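
To put rough numbers on that, here's a quick back-of-the-envelope sketch (the per-disk and per-lane figures are ballpark assumptions, not measurements):

```python
# Back-of-the-envelope check of 8 disks against a PCIe 1.0 x4 slot.
# Assumptions: ~100 MB/s sequential per 5400 RPM disk at the outer tracks,
# and ~250 MB/s of bandwidth per PCIe 1.0 lane.

ports = 8
per_disk_mb_s = 100                 # assumed outer-track sequential rate
pcie_lanes = 4                      # x4 slot, as on the MV8
per_lane_mb_s = 250                 # approximate PCIe 1.0 per-lane bandwidth

aggregate_disks = ports * per_disk_mb_s     # ~800 MB/s from the disks
bus_limit = pcie_lanes * per_lane_mb_s      # ~1000 MB/s for the x4 slot

print(f"8 disks: ~{aggregate_disks} MB/s vs x4 bus: ~{bus_limit} MB/s")
# Close enough that a full 8-drive sequential workload can brush up against
# a x4 link, though only in that best case; random I/O is far below it.
```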

Either way, his prime choice is whether to VM or not to VM, and if not to VM, then what to run. I think he should focus on that first and then make a choice on hardware.
 
Before you go any farther down the path of building your own ESXi box, if you haven't already, be sure to read up on creating your own whitebox host. There are some caveats with doing this, as hardware support is quite limited, especially with NICs. http://www.vm-help.com//esx40i/esx40_whitebox_HCL.php is the list I used to spec out my machine at work. If your hardware isn't compatible, or if the additional cost is prohibitive, I would go for a standalone solution tailored for NAS work as others have suggested.

Moving along with the VMware option, if the 4 other VMs aren't going to be doing anything intensive and the file server is only being used for media streaming and/or data backup duty it might not even be worth the headache of making two separate RAID arrays managed by ESXi for this. A single, large hardware RAID array with multiple spindles should provide more than enough performance and space for what you want to do and will make resource allocation through vSphere and troubleshooting much simpler.

The ESXi server I have set up is a bit of a sandbox testing lab like yours; there are 3 VMs locally hosted on the ESXi box, and it is also attached to an NFS datastore for any further expansion. Those 3 local VMs are hosted on a single physical disk, not even a RAID array. I just did a quick and dirty Windows file transfer from a hosted VM to a separate physical box and nearly saturated the gigabit link between the two before it finished; I saw it peak at 95 MB/s. What I'm getting at is that if you have any sort of striped array, you'll likely run out of network bandwidth before you run out of disk I/O bandwidth. Unless you're trunking a couple of NICs together and need >125 MB/s speeds, this setup would suit you just fine. In my opinion it's always best not to overcomplicate things.
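
For context on that 95 MB/s figure, here's a small sketch of why the gigabit link, not the storage, is the limit (the two-disk stripe is an assumed example, not something measured on this box):

```python
# Why a single gigabit link is the bottleneck for file serving.
# 1 Gb/s = 1000 Mb/s; dividing by 8 bits/byte gives the theoretical ceiling.

gigabit_theoretical = 1000 / 8          # 125 MB/s before protocol overhead
observed_peak = 95                      # MB/s seen in the quick test above

# Assumed example: even a modest two-disk stripe of 7200 RPM drives
# (~120 MB/s each, sequential) already exceeds the link.
stripe_throughput = 2 * 120             # ~240 MB/s

print(f"gigabit ceiling: {gigabit_theoretical:.0f} MB/s, "
      f"observed: {observed_peak} MB/s, "
      f"2-disk stripe: ~{stripe_throughput} MB/s")
# The array can outrun the network, so extra local RAID arrays mostly
# buy capacity and redundancy rather than faster transfers to clients.
```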
 
Honestly, go Hyper-V and use WHS or another 2008 R2 guest as your file serving OS(es). You can pass physical disks directly to VMs in 2008 R2 running Hyper-V, and pass virtual disks if you want. If you are running mostly Windows guest OSes, Hyper-V works great.

And WHS V2 works great in Hyper-V, though I would strongly advise waiting before putting production data on V2 at this point.

As for hardware, what's your budget? You can do what you want very easily, but you can also spend a lot or a little bit.
 
I would switch to Hyper-V Server R2 or Windows Server 2008 R2 and run 2008 R2 as your file server. Hardware-wise, just add a decent hardware RAID controller and drives, and attach that entire array to your file server. Keep your ICH10R arrays for boot and VMs.
 
Here's what I'm using:

Tyan S5380 w/ two quad-core Xeon L5410s
16 Gigs of RAM
Areca 1680ix RAID controller
HP SAS Expander
Two 150 Gig Raptors in RAID 1 for the OS (these are recycled from a server two generations ago; one died near the tail end of the warranty and WD sent me one of those new 2.5 inch Raptors. The other one mysteriously died too and is awaiting replacement from WD.)
Six Seagate 1TB ES.2 hard drives in RAID 6 (also from my previous server. Had 18, sold off all but these six. Used for my data and Hyper-V VMs)
Fourteen 2TB Hitachis in RAID 6 used to store my movie collection. This array spins down when not in use.

Windows Server 2008 R2 is installed on this box, and the host OS serves as the file server, Windows Server Update Services, Windows Deployment Services, and Hyper-V host. It is a member of a domain as well. In Hyper-V I run VMs for two domain controllers and an Exchange server. The reason I do it this way is that I'd rather not have all my stuff stuck in a .VHD file should something happen. I could have a VM access a drive directly, but that just seemed more complex than it had to be, especially since I wanted to make sure certain arrays spun down when not in use. The server does boot up before the DC starts, but it's not a big deal; it just throws a warning in the event log and everything is fine once the DC is up. The VMs are set to start in a specific order to avoid issues with Exchange services not starting because a DC isn't available yet. Also, my friend has his own version of this setup, and we use SonicWalls with a VPN tunnel between our "sites" so our domain forest spans both sites, just in case one goes down.

As long as your hardware supports virtualization and since you're familiar with Windows OSes already, Hyper-V is more than capable in doing what you want.

As for reboots, the only time that happens is for patches. Using group policies, the domain controllers, Exchange, and whatever other member servers/workstations reboot as necessary for updates pushed out by Windows Server Update Services at a predetermined time. A slightly different policy is applied to the Hyper-V host servers, where the patches come down ready to be installed but never trigger a reboot. Every so often I will log in, shut down the VMs, install updates, and reboot the physical machine. I do that maybe every other month during a predetermined green zone (IOW, when the wife is asleep, since rebooting the server would affect the Media Center PC). My friend isn't as proactive, partly because he also runs a SageTV server directly on his host OS. He's gone about 4 months without a software update/reboot.

Just put a little planning and creativity in and you should be able to run everything you need on your current box.
 
Thanks for all the awesome feedback and info.

With regards to budget, I want to stay under $2k. One of the most agonizing decisions is whether to go with hardware RAID, because that alone blows a $500-$600 hole in the budget. However, because I want future expandability, I am leaning heavily towards it.

I honestly need to give Hyper-V another look. We run ESX exclusively at my job, so I wanted to immerse myself in it to become more familiar with the product. In the process, however, I have not done much with Hyper-V and should probably consider it as an option.

The immediate issue I am facing with my current hardware setup is lack of storage expandability. All of the SATA ports are in use, including the eSATA port, to which I have a 2TB drive attached that backs up the local OS and a few Windows 7 Media Center computers.

Those of you with large capacity home NAS boxes, what do you have to say on the topic of a server case? I was looking at this as an option:
http://www.newegg.com/Product/Product.aspx?Item=N82E16811192058
I really would rather not have a rack mount case sitting around, but if needed, I can go down that road.

Also, and I admit this may sound like a silly question, but how do current RAID controllers (Areca, LSI, etc.) interface with the hot-swap bays on a case to let you know if a drive has failed? Is that something that has to be compatible between the case and the RAID controller?
 
I have a Lian-Li 343B cube case right now. It has eighteen 5.25 inch bays (nine on each side). I use six 5-in-3 hotswap modules (each turns three 5.25 inch bays into five 3.5 inch hotswap bays), which gives me 30 hotswap bays for hard drives. The server has room for six more 3.5 inch drives as well, but I'm not currently using them.

I'm actually looking into ditching this case and going with a rackmount case. I'm waiting for Norco to release their 4224 4U rackmount case, which has room for 24 hotswap bays. Three of these cases with an HP SAS expander in each would give me 72 bays for hard drives in 12U of space (about 21 inches of vertical height). It is actually more space-efficient to use rackmount cases than the Lian-Li, not to mention the Lian-Li really isn't designed for EATX motherboards like the one I'm using right now. The more costly alternative is to go with a Supermicro 846 or 847 series case. These are in the $1000+ range, but you get all the diagnostic LEDs and health monitoring features not found on the cheaper cases. Plus, the Supermicros have an optional SAS expander backplane, so no additional SAS expander needs to be purchased.

The RAID controllers use SFF-8087 to SATA breakout cables. One SFF-8087 interface turns into 4 SATA connectors, which is what is on the back of the hotswap bays in my current case. If you get a SATA RAID controller, you'll need one with enough ports for the number of bays you want. If you get a SAS RAID controller (like the Areca 1680 series or the 3ware 9690 series), it will have two SFF-8087 ports that can hook directly to 8 SATA drives. If you plug a SAS expander into the RAID controller, it will multiply that one SFF-8087 port into six SFF-8087 ports (give or take, depending on the SAS expander model), or 24 SATA ports, and you can daisy chain multiple SAS expanders to reach the ~128 drive limit of most SAS RAID controllers.

To get fault lights, enclosure temperature, and all the cool stuff, you need a backplane that supports SES or I2C, which none of the budget cases/hotswap bays have; you'll need to go with the more expensive Supermicro cases for that. An easy way to deal with it is to just label each hotswap bay with the serial # and/or the logical bay number the controller gives it. If a drive fails, just cross-reference the serial # or bay # from the RAID controller's management software against the label on your hotswap bay.
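
To make the fan-out math concrete, here's a small sketch using the numbers above (the six-ports-per-expander figure is the give-or-take value mentioned, so treat it as an assumption):

```python
# SAS connectivity fan-out, using the figures described above.
# Each SFF-8087 connector breaks out to 4 drives; an expander turns one
# controller port into roughly six SFF-8087 ports (model dependent).

SATA_PER_SFF8087 = 4
CONTROLLER_PORTS = 2          # e.g. two SFF-8087 ports on an Areca 1680-class card
EXPANDER_PORTS = 6            # assumed ports per SAS expander (give or take)

direct_drives = CONTROLLER_PORTS * SATA_PER_SFF8087        # 8 drives, no expander
drives_per_expander = EXPANDER_PORTS * SATA_PER_SFF8087    # 24 drives per expander

# Daisy-chaining expanders scales further, up to the controller's ~128-device limit.
print(f"direct: {direct_drives} drives, one expander: {drives_per_expander} drives, "
      f"controller limit: ~128 devices")
```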
 
Quick answers:
1. Hyper-V is really easy to set up for any Windows guest OS. Have you ever installed Virtual PC in Windows 7? It is basically that simple. Hardware-wise, almost anything you buy is going to have Windows Server 2008 R2 support, so with Hyper-V that is not a huge issue.
2. I would strongly urge you not to take a consumer case and stuff hotswap bays into it. With the Norco 4Us being sub-$400, they are really hard to beat.
3. Labeling drives/bays can also be done at the port level, so that is a bit of extra work, but it is about 30 minutes of time plus some upkeep. Not a ton if you are organized.

Hardware recs after the game tonight :)
 
Okay, here's the only way to virtualize NAS, IMO:

-CPU supporting virtualization extensions
-Motherboard supporting IOMMU (VT-d for Intel) - some Q35, Q45, X38, X48, X58 boards - Xen has a partial list of supported boards/more info
-Hypervisor that supports PCI passthrough (Xen does and I think VMWare ESX supports it for only up to 2 devices)
-Dedicated PCI-E RAID card

Pass the RAID card through to a guest OS. It will not be virtualized and should not incur too much of a performance penalty. You can also pass through NICs if you need the most performance.
 
Maybe a dumb question, but why all that for Hyper-V? You can pass RAID volumes through to Hyper-V VMs by putting them in "Offline" mode on the Server 2008 R2 box. The performance penalty is basically nothing (that you would notice with less than 6-8 Gbps of network bandwidth).

Management of the RAID happens either in Server 2008 R2 or, if you are using Areca, in the out-of-band management interface.

No need to pass through a PCI device, since you can attach, say, a RAID 6 volume to the Hyper-V VM as a SCSI disk. Works really well for Windows guests. Actually, using .vhd files is not THAT bad either, so long as you are not using dynamically expanding volumes and you don't have, say, three 500GB volumes on one 2TB drive all being accessed by different VMs.

Easy disk pass through is one thing I love about Hyper-V over ESXi.
 
So I downloaded the free Hyper-V Server 2008 R2 from Microsoft and gave it a little spin. My first impression is that unless you are in a domain environment, it is really a pain in the @ss to do anything with.

The free version is basically a Server Core installation of 2008 R2, so you perform the install and get dumped to a command prompt. I didn't want to put it on my domain, as I was just messing with the install on a spare laptop. I had to do some research into the various commands to disable the firewall, set up remote management, etc. Again, I realize that this is all due to security concerns, but I'm just saying.

With ESXi, I was up and running and able to start setting up VMs in about 10 minutes. I'll confess to being a n00b when it comes to Hyper-V, but for people getting into the VM thing, from my perspective, VMware products are a lot easier to manage.

Sorry, I know this is beginning to derail from the original topic, and I will certainly do a little more investigation, but from the perspective of building a bare-metal VM box, I have a hard time believing that even a Core Windows Server install has a lighter resource footprint than ESXi.
 
Hyper-V Server and Server 2008 w/ Hyper-V are totally different animals.

Server 2008 w/ Hyper-V has more features and is easier to use.
 
I agree; I was speaking more from the perspective of comparing the free offerings. If someone goes and grabs a free hypervisor, it is a tough sell for Hyper-V Server, IMHO (again, in a lab/home environment where you just need to stand a box up and start building VMs). I still want to give Hyper-V a fair shake, so I will see how it runs in a domain environment.
 
If someone goes and grabs a free hypervisor, it is a tough sell for Hyper-V Server, IMHO.
I agree with that.

I was just confused because from your OP it seems as if you already own 2008, so the Hyper-V role would already be free/paid for as well.
 
You are correct; I could always add the role. I will certainly have to contemplate doing that :) One thing I did not mention is that I use VMware Workstation on my main PC, since you still cannot run 64-bit guest OSes on Virtual PC :( I just need to give Hyper-V more of a look.

How would you compare the resource utilization/efficiency of VMWare Server vs. Hyper-V?
 
I use Hyper-V to run all my servers and I run VMware Workstation on my desktop.

I really like both, but for certain things, such as giving a VM direct access to your RAID without having to run an unsupported tool or create virtual hard drives, Hyper-V really shines.

VMware Workstation is great for my desktop, but I would not want it on my server over Hyper-V, or even ESX for that matter, just because Workstation gives you stuff like high-quality graphics, multi-display support, etc. that is really only useful on a desktop.

VMware Server, I think, is the worst product ever.
 
I really like both, but for certain things, such as giving a VM direct access to your RAID without having to run an unsupported tool or create virtual hard drives, Hyper-V really shines.

Except that Xen, Linux KVM, ESX and ESXi can all do that, and Xen at least can even pass through PCI-E video cards.
 
Maybe ESX 4 can now, but ESX 3.5 couldn't.

I don't really see KVM as a viable alternative for someone looking to manage Windows guests, compared to any of those others.

Xen can do PCIe video cards for free? That sounds like something worth looking into if that's the case. I don't know Xen.
 
Tough one here... but it sounds like you're making it more complex than needed (the inclusion of ESX). At my old job, I deployed a Win2k3 server (on a RAID 5 array) as a file server and hosted VMware Server on it to run the other services. Shares were mapped to the VMs. The host machine (file server) was very rarely rebooted (MS updates every quarter). The VMs could reboot all they wanted.

Granted, VMware Server 2.0 was a step down in certain respects (if you used 1.x), but for some dumb reason I dealt with it. I haven't kept up to date with VMware happenings, but that machine I deployed is still running well and I haven't had a call in a while (the last call was a BSOD on a Win2k VM over 6 months ago).

If I recall, ESX-compatible RAID cards are quite pricey.
 
The more I think about it, the more I am leaning towards two separate boxes: one for VMs (ESX/Hyper-V/whatever), and the other strictly as a file server. The file server would have a decent RAID controller, and then if I go with ESX, I can keep the VMs on NFS shares.

I will probably now go haunt the Areca owner's thread ;)
 
That's fine, but you could run a VM, pass a RAID card through to it with VMDirectPath, and, I think, have a datastore for your other VMs run from that VM. You would just have to set up a delayed start for the other VMs. Obviously you'd want to run VMware from a RAID 1, which means a second RAID controller or, in some cases, possibly onboard RAID.

Up to you. You've got multiple options.
 