FreeNAS - Temporary Storage for VMware ESX Cluster

hotcrandel (Gawd)
Joined: Feb 26, 2010
Messages: 781
I have an ESX cluster with three hosts running HA, and I want to shuffle some things around on my storage, but I'm a little short on free space.

With another, much older storage product (an older HP SAN), I ran into file locking issues where only a single host could read/write any particular datastore. That may have been a limitation of the product or user error.

I was wondering if anyone has added FreeNAS iSCSI or NFS storage to a VMware cluster with good results.

In particular, I'm wondering whether FreeNAS can tolerate multiple iSCSI or NFS sessions, one for each member of the cluster, without file locking issues.

Are there any recommended strategies (a new datastore per VM? larger datastores with multiple VMs?), iSCSI vs. NFS, etc.?

The hardware I will be using is probably a Core 2-era Xeon with 16 or 32 GB of RAM, 7200 RPM or 10k enterprise disks, and SSD caching, shared as a ZFS volume.
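Roughly what I have in mind, if it helps to picture it (the pool layout and device names below are just placeholders, not a final design):

zpool create tank mirror da0 da1 mirror da2 da3   # striped mirrors of the spinning disks
zpool add tank cache da4                          # SSD as L2ARC read cache
zpool add tank log da5                            # SSD as SLOG for sync writes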

The VMs involved are generally dev environments and are already backed up with Veeam.
 
I'd be most concerned about the random read/write performance of ZFS on FreeNAS with three hosts hitting it, especially as nested vdevs aren't supported by the GUI. And do your research on NFS and FreeNAS: write performance sucks unless you either disable the ZIL or move it to a fast SSD. Without knowing more about your scenario, I'd say stick with iSCSI for the MPIO benefit.
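For reference, the two ZIL options above boil down to something like this on the dataset backing the share (pool/dataset and device names are placeholders, and disabling sync trades safety for speed):

zfs set sync=disabled tank/vmware   # fastest, but in-flight writes can be lost on power failure
zfs set sync=standard tank/vmware   # back to the default behavior
zpool add tank log da6              # or keep sync on and put the ZIL on a fast SSD (SLOG)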
 
NFS > iSCSI. The BSD iSCSI target was pretty badly mucked up last I checked, but it's been a VERY long time since I've tried it. The NFS side works fine if you set async writes, and it's only going to be temporary anyway.
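If it helps, hooking a FreeNAS NFS export up to an ESXi host is roughly this (the IP, export path, and datastore name here are placeholders for your own):

esxcli storage nfs add --host 192.168.1.50 --share /mnt/tank/vmware --volume-name freenas-nfs
esxcli storage nfs list   # confirm the datastore mounted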
 
I used iSCSI on NAS4Free for two years with no problems on my last setup. It was all over 1 GbE links. It wasn't the fastest, but it worked and was reliable.
 
OpenFiler?

I just set up a test OpenFiler with NFS, but I only have one VM running so far, so I really can't comment on performance.
 
I used FreeNAS temporarily and it seemed to be OK. I think the big thing is having enough of a box to run it on. BTW, I also temporarily mounted it on an ESX host; iSCSI still worked.
 
iSCSI always worked. Till it didn't: the locks got scrambled and it corrupted the shit out of the datastore. It didn't handle reservations (or ATS) in the way we expected, so things got messy.
 
I've used FreeNAS for the last few years. Recently I tried iSCSI storage mapped to both ESXi and a physical Windows 7 box, and both would have hang issues on startup. VMs would take forever to load, and my Windows 7 physical box would boot to the desktop, but Explorer would freak out for 10 minutes while it resolved whatever issue it was running into.

I have since moved my VMs to an NFS share on my FreeNAS box and have had no issues. I'm using CIFS for my Windows shares.
 
Does anyone know how iSCSI on the new in-kernel target compares to the older istgt?
 
My experience mirrors bds1904's. I ran FreeNAS 8.0.3 through 8.3 for about three years using a pair of 1G connections to my ESXi 5.0 cluster in my home lab. I've since moved on, but the software was stable for me. I played with both NFS and iSCSI and found iSCSI to be smoother overall with MPIO going (path setup below, for reference). Granted, I wasn't using ZFS at that point; it was 12 x 750s on a 3ware RAID card.

lopoetve, did the testing point to the iSCSI target itself or to a combination of iSCSI and ZFS?
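For anyone setting MPIO up the same way, it's roughly this on the ESXi side (the vmk/vmhba names and the naa device ID are placeholders for your own):

esxcli iscsi networkportal add --nic vmk1 --adapter vmhba33   # bind each iSCSI vmkernel port to the software initiator
esxcli iscsi networkportal add --nic vmk2 --adapter vmhba33
esxcli storage nmp device set --device naa.6589cfc000000xxxx --psp VMW_PSP_RR   # round-robin across the paths
esxcli storage nmp device list   # verify the path selection policy took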
 
"Recently I tried iSCSI storage mapped to both ESXi and a physical Windows 7, both would have hang issues on startup..."

I'm a little confused... how did you get a single iSCSI target to mount on two systems without even more issues? I'm assuming in Windows it was an NTFS volume, which isn't cluster-aware storage. Can you elaborate for me?
 
Well, he never actually said "a single target", just "iSCSI storage". Could have been multiple targets (LUNs?) on the same array...
 
Ah. Might be a reading comprehension issue on my part. I read the "I tried iSCSI storage mapped to both ESXi and a physical Windows 7..." as to mean one target. My mistake!
 
"lopoetve, did the testing point to the iSCSI target itself or to a combination of iSCSI and ZFS?"

Definitely the iSCSI target. It wasn't obeying reserve/release properly, which allowed the locks to be reset incorrectly and resulted in on-disk corruption.
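If anyone wants to check what a given target actually claims to support before trusting it with a shared datastore, ESXi will tell you per device (the naa ID below is a placeholder):

esxcli storage core device vaai status get -d naa.6589cfc000000xxxx
# check the "ATS Status" line; if ATS is unsupported, ESXi falls back to SCSI reservations for its locking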

"I'm a little confused... how did you get a single iSCSI target to mount on two systems without even more issues?"

Lots of arrays do storage by LUN, which means the target is the array (or controller) itself and not the actual storage volume. :)
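You can see that distinction from the ESXi side, for what it's worth (these are just the stock listing commands, nothing FreeNAS-specific):

esxcli iscsi adapter target portal list   # one entry per target the initiator has discovered
esxcli storage core path list             # each path shows the target IQN plus the LUN number behind it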

"Ah. Might be a reading comprehension issue on my part... My mistake!"

See above. Could be one target, different LUNs. It all depends on how BSD presents storage, which I honestly don't remember.
 