I just want to do this the "clean" way.
I'll try to explain better: if I run 'umount /my/raid/mountpoint' and then use /dev/md0 as an iSCSI backstore (which should work, though I have not tried it yet), what happens if I run 'mount /dev/md0 /my/raid/mountpoint' again after some time?
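For reference, what I have in mind is roughly this (completely untested, assuming the LIO target with targetcli; the IQN is just a placeholder):

    umount /my/raid/mountpoint
    # expose the raw md device as a block backstore
    targetcli /backstores/block create name=md0 dev=/dev/md0
    # create the target and map the backstore as a LUN
    targetcli /iscsi create iqn.2014-01.local.storage:md0
    targetcli /iscsi/iqn.2014-01.local.storage:md0/tpg1/luns create /backstores/block/md0
    targetcli saveconfig

It's the local mount part afterwards that I don't get.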
So my question is: how can I...
Thank you for the explanation, it's very helpful to have this stuff explained outside of the technical documents.
My plan is:
esxi1+opensm (hca0,ib0) <------------------------> (hca0,ib0) storage
esxi2+opensm (hca0,ib0) <------------------------> (hca0,ib1) storage
so I should have no problems...
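If I understand it right, each back-to-back link is its own little subnet, so each opensm instance has to be bound to the right local port. Something like this (assuming the stock opensm binary; the GUID is a placeholder you'd take from ibstat, and on ESXi the packaging and flags may differ):

    # find the port GUIDs of the local HCA
    ibstat
    # run one SM instance bound to a specific port GUID
    opensm -g 0x0002c903000xxxxx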
To be completely honest, I still have some problems understanding how opensm works.
I mean, the theory is "easy and obvious", but in practice I'm not sure how opensm behaves and what exactly I have configured.
Maybe that's normal and I just need to practice more..
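Right now the only sanity checks I know are these (from infiniband-diags; I'm assuming those tools are installed on your distro):

    # show which subnet manager is currently master, and its priority
    sminfo
    # port state should go to "Active" once an SM has brought the link up
    ibstat
    # list the nodes the SM has discovered on the fabric
    ibnodes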
About the MTU I was pretty...
Good point; at one point I thought the same thing and verified that opensm on the storage node was not running, but the problem was something else, as I said earlier.
HYPERV1 is the name of my VMware vSphere ESXi host, it's not a Microsoft Hyper-V server :D
The MTU mentioned in the devinfo output is 4096, but in ifconfig ib1 it is 2044.
Since I'm running IPoIB and I'm an IB newbie, I'm not sure what exactly the 4096 in devinfo means, I suppose this is for...
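From what I've read so far (please correct me if I'm wrong), the 4096 should be the InfiniBand link MTU of the port, while 2044 is the IPoIB datagram-mode MTU, i.e. 2048 minus the 4-byte IPoIB header; connected mode should allow a much larger MTU. This is what I was planning to test (assuming the standard IPoIB sysfs interface, with ib1 as my port):

    cat /sys/class/net/ib1/mode              # "datagram" or "connected"
    echo connected > /sys/class/net/ib1/mode
    ip link set ib1 mtu 65520                # max IPoIB MTU in connected mode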
Thank you dasaint, I solved it by using older Mellanox drivers, but I will try on a test node as you suggested. Anyway, do the inbox drivers need to be installed at all? Do I install them after the Mellanox ones?
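For what it's worth, this is how I'm checking what is actually loaded on the node (assuming the mlx4 driver; ofed_info only exists once the Mellanox OFED stack is installed):

    modinfo -F version mlx4_core     # version of the mlx4 module currently in place
    ofed_info -s                     # prints the installed Mellanox OFED version
    ibstat | grep -i firmware        # HCA firmware version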
If you have time and want to take a look, the thread is here: http://hardforum.com/showthread.php?t=1846599...