Napp-it - NFS & ESXi 5.5

jim82

Dear community,

Before I repurpose my server as an all-in-one fileserver, I have a few questions for the experts.

I'm currently running hardware RAID10 with VMFS on ESXi 5.5. Multiple VMs are used for a fileserver, streaming server, etc. I'm finding performance to be very limited on the I/O side, even though I'm running with a cache module + BBU.
I'd like to use a Napp-it All-In-One and pass through my IBM M1015 controller (IT mode) to a single VM used for storage, then mount an NFS share from that VM as a datastore.

Hardware:
Supermicro X9-SRL
Xeon E5-2620
128GB DDR3 ECC Reg
4 x 3TB WD RED
2 x 128GB Samsung 830 SSD

Questions:

1) If I use a VMXNET3 adapter for Napp-it, how will ESXi communicate with the NFS datastore? Internally in the vSwitch or across my 1Gbit network? I'm asking because I don't want to be limited to 1Gbit speeds on the NFS datastore.

2) Do I need both a ZIL (SLOG) and an L2ARC for my pool? I was thinking about assigning 32-64GB of RAM to the storage VM.

3) Is it a good idea to run a file server in a VM (Windows Server 2012) with its VMDK files located on the NFS datastore, or should I use another option?


Thanks for any replies
BR
Jim
 
My brain is still waking up, so bear with me if my explanations below are sub-par.

1) If I use a VMXNET3 adapter for Napp-it, how will ESXi communicate with the NFS datastore? Internally in the vSwitch or across my 1Gbit network? I'm asking because I don't want to be limited to 1Gbit speeds on the NFS datastore.

A: The host will communicate with the datastore internally within the vSwitch. Wire speed only comes into play when a client on the physical network starts pulling files from it.
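
For reference, here's roughly what the wiring looks like once the storage VM is up. This is just a sketch - the IP 192.168.1.10, the dataset tank/nfs and the datastore name zfs-nfs are placeholders for whatever you actually use:

Code:
# on the Napp-it/OmniOS VM: export the dataset over NFS
zfs set sharenfs=on tank/nfs

# on the ESXi 5.5 shell: mount the export as a datastore
esxcli storage nfs add --host 192.168.1.10 --share /tank/nfs --volume-name zfs-nfs

# verify
esxcli storage nfs list

As long as the storage VM's vNIC and the host's VMkernel port are on the same vSwitch, that NFS traffic stays internal and never touches the physical 1Gbit NIC.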

2) Do I need both a ZIL (SLOG) and an L2ARC for my pool? I was thinking about assigning 32-64GB of RAM to the storage VM.

A: If you want performance and redundancy, yes.
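
If you do go that route, adding the two Samsung 830s is a couple of zpool commands. A rough sketch only - the pool name tank and the device IDs are hypothetical (use 'format' to find yours):

Code:
# option A: mirror both SSDs as a SLOG (separate ZIL device)
zpool add tank log mirror c2t0d0 c2t1d0

# option B: one SSD as SLOG, the other as L2ARC read cache
zpool add tank log c2t0d0
zpool add tank cache c2t1d0

# confirm the layout
zpool status tank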

3) Is it a good idea to run a file server in a VM (Windows Server 2012) with its VMDK files located on the NFS datastore, or should I use another option?

A: Yes, this is fine (and preferred by most).
 
What is the bottleneck you're hitting?

A RAID card with a BBU is going to give you many times faster writes than ZFS with a ZIL will, unless these are all random writes that exceed the cache on the RAID card.

As far as reads go, if that's your issue, ZFS with just 32GB of RAM will likely do it. If you have an insane workload on that box, then yes, adding an L2ARC drive can help things along.
 
You don't necessarily need L2ARC - if your ARC is big enough it may never even be used. I had one for a while, but after checking the stats it was almost never touched because the RAM allocated was large enough, so the SSD got used for something else. Check your ARC stats once it's up and running to see how you're doing - it may not be necessary.
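
On an OmniOS/Solaris-based Napp-it box you can pull the raw ARC counters from kstat to work out the hit rate yourself - just an example of where to look, nothing specific to this setup:

Code:
# ARC size plus hit/miss counters
kstat -p zfs:0:arcstats:size zfs:0:arcstats:hits zfs:0:arcstats:misses

# L2ARC counters (only meaningful once a cache device is attached)
kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses

Hit rate is just hits / (hits + misses); if that's already in the high 90s, an L2ARC SSD won't buy you much.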
 
Thanks for your replies, I'll try and post some more info here:

Currently performance is "good" when only one or two VMs are hitting the RAID10, but when I start using other VMs as well, disk performance is terrible.

It seems like RAID10 + BBU + spindles + VMFS doesn't really scale all that well. I could be mistaken, but that's how it looks.

So I was looking at some other options, i.e. ZFS or Storage Spaces.

Although I'm not sure whether the ZFS+NFS+VMDK option will actually be faster?

Any inputs much appreciated.

BR
Jim
 
Here is the issue: ESXi mounts NFS datastores with sync writes. Without a GOOD SLOG device, your writes will suck dog balls. For an AIO I would argue you can just set sync=disabled on that dataset: if the host crashes or loses power, everything goes down together, so filesystem corruption isn't as much of a concern as when a guest writes a bunch of data to an external SAN/NAS and keeps going, but the data gets lost because the SAN/NAS crashed. If you do hourly ZFS replication and/or snapshots, then on the off chance you do end up with a corrupted guest, it's trivial to roll back.

As far as L2ARC goes, I have 16GB of ARC in my external SAN/NAS and the ARC hit rate is over 90%. I wasted money on two 128GB Samsung 840 PRO SSDs for L2ARC which rarely get touched. My best guess is that the vast majority of accesses hit in ARC (the working set), while a lot (most?) of the rest are all over the place and so not in L2ARC either. I would recommend ZFS+NFS.
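
To make the sync=disabled / rollback idea concrete, a quick sketch - the dataset name tank/vmstore and the snapshot name are placeholders:

Code:
# disable sync writes for the VM dataset only; the rest of the pool keeps its default
zfs set sync=disabled tank/vmstore

# hourly snapshot (normally driven by cron or a napp-it auto-snap job)
zfs snapshot tank/vmstore@hourly-01

# if a guest does get corrupted, roll the dataset back to the last good snapshot
zfs rollback -r tank/vmstore@hourly-01

Keeping the VM datastore on its own dataset is what makes this kind of per-datastore rollback painless.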
 
If you don't need HA, running ESXi from USB and passing physical drives through to a VM running FreeNAS is one way you could go. FreeNAS is idiot-proof to set up and will do NFS, iSCSI and CIFS natively.
 
True. On the other hand, you would need at least a small SATA drive to use as a datastore for the storage appliance, and if so, you might as well install ESXi on that - makes it quicker to boot and all that...
 
If you already have this set up, try setting sync=disabled on the dataset for your VMs and see if the multi-VM performance improves...
 
At work our ESXi server is running RAID 10 (3x 1TB mirrors, 6 drives total) on a Dell PERC. Performance is really pretty bad for the ~6 VMs we have running, so I'm not surprised you're seeing something similar.

My home server with an 8x 4TB RAIDZ2 absolutely destroys the work lab server in disk performance, which kinda doesn't make sense until you account for how large an impact the ARC has. Both setups use consumer-grade drives.

Go with a large amount of RAM and you might find performance is good enough to not need to look at a ZIL or L2ARC.
 
Oh dear. Re-reading the OP, he's using HW RAID10 and is *thinking* of switching to ZFS. My bad (apologies...). Everything else I said I stand by, if and when he switches :)
 
Thanks for all your replies, I'm still not sure whether I should move to ZFS+NFS yet. But I guess it's worth a try, considering I'd also get a bit more usable disk space (RAID10 vs RAIDZ1) out of my 4 drives. At least ZFS is more flexible and scales better than a local RAID controller.

Input is still appreciated.

/Jim
 
Note also that ZFS will have better random read IOPS in RAID10 (striped mirrors) than in RAIDZ*, since it round-robins reads across both sides of each mirror. From what I know, many HW RAID controllers do not do this.
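
You can watch that behaviour yourself with per-vdev stats while the VMs are busy - pool name tank is again just a placeholder:

Code:
# per-vdev / per-disk I/O, refreshed every 5 seconds
zpool iostat -v tank 5

On a striped-mirror pool you should see reads spread across both disks of each mirror, while writes land on both.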
 
I ended up using NexentaStor v3.15 with a 5.5TB striped mirror (RAID10). Performance was slightly better with Napp-it, but NexentaStor has the nicer web interface and easier configuration.

Napp-it is a powerful tool if one has the time to learn it, and I use it for my much bigger standalone fileserver. But for this specific purpose, NexentaStor was the winner.

Thanks to all for your contributions.
/Jim
 