nas4free iscsi question regarding targets/portals

Chandler

I have an SSD pool and a SAS pool set up on nas4free to give me a cheap storage solution for vSphere 5.5. I have ZFS volumes in both pools, configured as targets in the iSCSI setup.

I have four hosts - when I connect the datastore to one, the other hosts will not mount it. They see the device, but will not mount the datastore. I have to use SSH to mount it. When I move over to the second host I see this message:

Code:
~ # esxcfg-volume -l
VMFS UUID/label: 54ee4445-87ee479e-85ed-00a0d1eaa278/ssddatastore
Can mount: Yes
Can resignature: No (the volume is being actively used)
Extent name: t10.FreeBSD_iSCSI_DISK______NFSN00MZOI0YQB__________________:1    range: 0 - 716543 (MB)

The "Can resignature: No (the volume is being actively used)" line means it is currently mounted elsewhere.
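(For reference, this is roughly what I have been running over SSH to get it mounted on the other hosts - the UUID is the one from the listing above; as I understand it -M mounts persistently and -m only until reboot:)

Code:
~ # esxcfg-volume -M 54ee4445-87ee479e-85ed-00a0d1eaa278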

I have never actually built a cluster before because I have never had the licensing. Is this an iSCSI issue because it can only have one connection, or is it another issue?

I am not sure if this belongs in the data storage forum or virtualization, but I thought I would try here first. As I understand it, I can have the same datastore accessible across multiple machines, so that should not be the problem. I originally set it up on one host and then built the cluster, but the cluster gives errors because there are no datastores configured on the other hosts.
 
You set up a target volume for every connection.
Two hosts cannot share a target volume at the same time.
 
If you want to share a folder to multiple computers, you have to use NFS or CIFS, not iSCSI.
 
You set up a target volume for every connection.
Two hosts cannot share a target volume at the same time.
So I can only share the LUN with one machine?

If you want to share a folder to multiple computers, you have to use NFS or CIFS, not iSCSI.

NFS performance with NAS4Free and ESXi is very limited. This is a bummer.

Is this not the way enterprises typically set things up (with iSCSI)? I thought iSCSI did everything at the block level...
 
I have four hosts - when I connect the datastore to one, the other hosts will not mount it. They see the device, but will not mount the datastore.

Your LUN presentation is not identical between systems - most likely because you have different target port groups or the like. That means the LUN signature (a hash of several values) differs between hosts, so the first host that saw it and formatted it is the only one that can access it. Put all the hosts in the same initiator group and all the LUNs in the same target group, and you should be good - you may have to resignature after that (which is a bummer and takes some downtime) to even things out, but you can absolutely do what you're trying to do.
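If it does come to resignaturing, the rough sequence per host would be something like this (just a sketch - the UUID is the one from your listing, and a new signature means re-registering the VMs that live on that datastore):

Code:
~ # esxcfg-volume -l                                       # list the VMFS copies the host can see
~ # esxcfg-volume -r 54ee4445-87ee479e-85ed-00a0d1eaa278   # write a new signature to that volume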
 
So I can only share the LUN with one machine?

NFS performance with NAS4Free and ESXi is very limited. This is a bummer.

Is this not the way enterprises typically set things up (with iSCSI)? I thought iSCSI did everything at the block level...

Enterprises use block/NFS/FC/etc. interchangeably, depending on their needs. Each has advantages and disadvantages.

The reason NFS performance is horrible is that ESX uses FILE_SYNC for writes, due to how the virtual machines perceive storage and acks from the array. Sync writes on ZFS are horrendously slow most of the time, unless you've tuned the living daylights out of the ZIL/SLOG and the transaction queue.
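Roughly what that tuning looks like on the ZFS side - the pool/dataset names here are made up, and sync=disabled trades crash safety for speed, so treat this as a sketch rather than a recommendation:

Code:
# give sync writes a fast SSD log device (SLOG) instead of the spinning disks
zpool add tank log da6
# or acknowledge writes before they hit stable storage (risky on power loss)
zfs set sync=disabled tank/nfsds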
 
Your LUN presentation is not identical between systems - most likely because you have different target port groups or the like.

I am definitely using the exact same configuration, down to the hardware. The IP addresses on the hosts are different, though. I wanted to give each host its own dedicated route, but everything is in the same target group and portal. I assumed this would not be an issue because they are just paths.

E.g. Host 1:
172.16.0.50
172.16.1.50
Host 2:
172.16.0.51
172.16.1.51
Host 3:
172.16.2.52
172.16.3.52
Host 4:
172.16.2.53
172.16.3.53
(Those are the IPs for the vSwitch interfaces on each of the iSCSI networks, all switches are tagged with the VLAN 300 ID)
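(If it helps, the port binding on each host was done roughly like this - vmhba33 and vmk1/vmk2 are just what the software iSCSI adapter and vmkernel ports happen to be called on my boxes:)

Code:
~ # esxcli iscsi networkportal add -A vmhba33 -n vmk1
~ # esxcli iscsi networkportal add -A vmhba33 -n vmk2
~ # esxcli iscsi networkportal list -A vmhba33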
 
Correct - each host should have its own IPs. It's something on the storage side that differs between hosts. I've not used NAS4Free in a VERY long time, so I'm going off of what the NappIT/FreeNAS/etc. terms are for what I'm looking for.

Do this on the ESX hosts:

Code:
vmkfstools -V
cat /var/log/vmkernel.log | grep -i signature

Send me the output from each host. It may be that all 4 are complaining, maybe not :)
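If the grep turns up nothing, the esxcli equivalent of that esxcfg-volume listing on 5.x should also show any unresolved copies (assuming I'm remembering the namespace right):

Code:
~ # esxcli storage vmfs snapshot list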
 