More info on the hardware I will be working with this summer

Yes, you need to use esxcli to bind BOTH of your VMkernel ports to the software iSCSI (swiscsi) HBA. You can use a single vSwitch with both pNICs, but you need to modify each port group's failover order so that its respective vmnic is active and the other is standby.
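If you end up scripting it, the layout is roughly something like this (the port group names and IPs here are just placeholders, and on ESXi you would run these from the vSphere CLI or Tech Support Mode):

esxcfg-vswitch -A iSCSI-1 vSwitch1
esxcfg-vswitch -A iSCSI-2 vSwitch1
esxcfg-vmknic -a -i 10.0.1.11 -n 255.255.255.0 iSCSI-1
esxcfg-vmknic -a -i 10.0.1.12 -n 255.255.255.0 iSCSI-2

The active/standby override itself is set per port group in the vSphere Client (port group Properties > NIC Teaming > override switch failover order), so each VMkernel port ends up pinned to one vmnic.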

See here as this is still relevant: http://www.yellow-bricks.com/2009/0...with-esxcliexploring-the-next-version-of-esx/

So you need to bind the VMkernel ports you want to use on the iSCSI subnet, or ESXi will always use the vmk0 port? And if I only had one NIC, I would not need to do any binding?

I am confused because I see many articles showing the setup done without the binding. Now, if I put my iSCSI target and the VMkernel ports on a different subnet, will ESXi automatically use that VMkernel port the way it is using vmk0 by default right now? I just want to use the two pNICs with failover in case one link goes down.
 
I think it has more to do with getting the multiple TCP/IP sessions going -- you would still have to perform this step even if the vmkernel ports were on entirely separate subnets to get multipathing to work correctly.

Right on about the single physical nic. You need at least 2 for any of this to be relevant.
 
What exactly is multipathing? I am not sure I need to use it. I just want to use the two pNICs I have with failover in case one link goes down.
 
I think this is what I see going on in these tutorials: they are only using one NIC for the setup, like the one here http://www.techhead.co.uk/vmware-esxi-4-0-vsphere-connecting-to-an-iscsi-storage-target
The Drobo guide also shows the exact same method.
 
Multipathing is exactly what it sounds like. Multiple paths to the same LUN on a SAN. Used for having multiple iSCSI interfaces all active at the same time for increased performance and failover.

VMware uses round robin MPIO, which is why I prefer teamed NFS, but c'est la vie! Check here for a pretty simple setup guide:

http://defaultreasoning.wordpress.c...g-software-iscsi-round-robin-mpio-on-esx-4-x/
 
Does your iSCSI target have to support Multipathing?

Yes. I understand the drobo is on the low end, but if it has multiple interfaces which are able to write to the LUN (regardless of active/passive) then you could set it up. I don't know anything about that particular storage device.

ESX(i) does not necessarily use round robin. The storage subsystem in vSphere is all plugin based. For your situation I'm sure it is all default with no vendor specific plugins, so that means everything is VMware's NMP (native multipathing plugin). I also highly doubt you'll have to change anything other than the PSP -- if you even need to do that. With that said, you need to make sure you're using the correct SATP (storage array type plugin), followed by the correct PSP (path selection plugin) that is recommended by the vendor. There are 3 default PSPs: Fixed, MRU, and Round Robin (google is your friend). It depends on your array and situation. For instance, we have a high end Hitachi array that supports round robin, but by using MSCS in a supported fashion, the PSP for those Datastores must be set to Fixed instead.
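If you want to see what you've got before touching anything, the stock NMP tooling in 4.x will show it (no vendor plugins assumed here):

esxcli nmp satp list
esxcli nmp psp list
esxcli nmp device list

The device list output shows which SATP claimed each LUN and which PSP it is currently using.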
 
OK, I think I finally did it. Bear with me...

I put all my iSCSI ports and Drobo on a different VLAN.

I added the VMkernel ports and did the failover override for 1:1 mapping as instructed, so each VMkernel port has one NIC.

I then did the command line stuff:

After you have created your VMkernel ports and bound them to a specific NIC, you need to add them to your software iSCSI initiator:

esxcli swiscsi nic add -n vmk0 -d vmhba35

esxcli swiscsi nic add -n vmk1 -d vmhba35

esxcli swiscsi nic list -d vmhba35
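(As I understand it, a rescan of the software HBA is what actually picks up the targets afterwards; the Rescan link in the vSphere Client should be equivalent to something like esxcfg-rescan vmhba35.)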

I then went to Properties on the iSCSI ESX adapter, put in the IP addresses, and it found the Drobo.

The only thing is that it says it has 4 paths to the target. Is that normal?

Am I good to go now?
 
I assume you have 4 controller ports on the Drobo? That would make sense to me. So... if one of those paths were to fail, say you unplugged a cable or whatever -- it should fail over to another path OK. What I don't know is whether the Drobo has two "controllers". It wouldn't be odd to see 4 interfaces, 2 for controller "1" and 2 more for controller "2". Usually low-end devices like this are active/passive or quasi active/active using ALUA. You just want to avoid using an unoptimized path that would cause a LUN ownership transfer... that wouldn't help performance at all :).
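If you want to see exactly what those 4 paths are and what state they're in, the multipathing tools should show you, something like:

esxcfg-mpath -b
esxcfg-mpath -l

The brief (-b) output lists the paths per LUN, and the long (-l) output shows the state of each one (active/standby/dead).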
 
I don't know, the Drobo has two iSCSI/Ethernet ports on the back.

The steps I did are correct though, right?

This is what it looks like:
[screenshot attached]
 
I believe so. Looks active/passive to me. Make sure you use MRU.

ummm what is MRU lol....


I have been copying a huge amount of files back and forth and loaded a test VM up several times and everything seems rock solid.

I noticed only one of the pNICs is being used. Is that good? If that NIC fails, will the other one take over?


How does this look for performance?

[performance screenshot attached]
 
What is MRU? Didn't you read my previous posts ... :). Most Recently Used. Go to the host configuration, then storage, and then find the datastore you created. Right click on it and go to properties. EDIT: Forgot to add, you gotta click the 'manage paths' button. Then, at the top there will be a drop down... just make sure it is set to MRU...
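If you'd rather set it from the command line, it should be something along these lines (the naa ID below is just a placeholder -- pull the real one from the device list first):

esxcli nmp device list
esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_MRU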

Yes, if something fails, the other pNIC will take over... correct.

Performance-wise, erm, OK :) lol. That doesn't look HORRIBLE, but what would you expect from something so entry-level? It really is going to depend on your workload whether that is going to be enough horsepower or not.
 
I see, it is under Manage Paths; both hosts have it set to VMware Fixed. I think I should just leave it, everything seems fine! lol ;)

Should i bother enabling Jumbo Frames?
 
HAHA. Changing to MRU won't break anything. It'll happen without incident, but to each his own :p

Jumbo frames might help, marginally. Jason Boche had a good writeup on it. It takes a lot of effort to get working (manually editing vSwitch MTUs, etc.), and the performance gain, if there is one at all, was very small.
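For what it's worth, the manual part is roughly this on 4.x (names and IP are placeholders, and as far as I know you have to delete and re-create the vmknic because the MTU can't be changed in place):

esxcfg-vswitch -m 9000 vSwitch1
esxcfg-vmknic -d iSCSI-1
esxcfg-vmknic -a -i 10.0.1.11 -n 255.255.255.0 -m 9000 iSCSI-1

And the physical switch ports and the storage side have to support jumbo frames too, or you get nothing out of it.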
 
Jumbo Frames won't help you much at all in this config. Not worth the hassle.
 
Thanks for all the help guys. So far this has been a great learning experience. My first time setting up anything iSCSI.

When I go to enable DRS and HA I will probably have some more questions to fine-tune it.

I moved a test VM today with VMotion even though it was not in a cluster and it seemed to work just fine, cool stuff.

Though it is not essential for my project, I was teaching myself the vCenter Converter, and when I start the conversion I get the "MethodFault.summary" error. I did google this, but all I found was that it was fixed in newer releases. :confused:
 
What type/how many drives do you have in your Drobo, are you using single/dual disk recovery, and how did you carve the system into LUNs (and how did you format the LUNs)? Your performance numbers (assuming that is the only thing running), look much lower than what we get off our Drobo. As long as it isn't running multiple VM's (or only one actually wants to touch the drive), we usually get pretty close to wire speed.
 
5 drives with the 2 disk failure option. We only have 2.94 TB total with that configuration so I created one large 2TB datastore and formatted it through ESXi.

I hope that was the right way to do it lol.:confused:
 
You really *need* (or at least we do) the full 8 spindles from the box to get decent performance. Turning on double-parity is another performance hit.

What formatting option did you use in ESX? If you don't use the largest block size, the array will slow down even more under load. Also, the unit supports thin provisioning, so feel free to make however many LUNs you need to manage your VMs intelligently. You will only use as much space on the disk as you have files, so you don't have to worry so much about sizing.

For reference, our Drobo has ~12TB of usable space for the end-users, broken out into 10 2TB LUNs for management. With a single workload going, transfer speed is usually in the range of 100-110 MB/s from the array. This is with 8 2TB Western Digital Black-label drives. The original configuration had WD Greens, and performance was abysmal. When multiple jobs queue up, the array chokes on IOPS, and transfer speeds fall through the floor. Try running ~4 VMs with CrystalDisk all going at the same time; you'll see what I mean.
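If you do re-carve it, the block size can be picked either in the Add Storage wizard or from the command line, roughly like this (the label and device path are placeholders -- the 8m block size is what gives you the 2TB max file size on VMFS-3):

vmkfstools -C vmfs3 -b 8m -S Drobo-LUN1 /vmfs/devices/disks/naa.xxxxxxxxxxxx:1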
 