I'll begin by apologizing a) for my ignorance and b) if this leans a little more toward a networking forum than a VM one. That out of the way, I'm looking to build a ZFS NAS/SAN that will host datastore(s) for 2 ESXi hosts sitting in the same rack (so distance isn't an issue) in my home lab. I'm trying to find a network solution faster than standard 1Gbps, since that chokes at certain times, and the most economical path suited to my needs seems to be 4Gbps FC.
Which brings me to my main question: if I were to drop a 4-port PCIe HBA like the QLE2464 into the ZFS box, could I run multiple direct lines to the ESXi hosts and have functioning shared storage, or am I missing functionality without an FC switch? While I wouldn't mind getting the FC switch, if it doesn't make a big difference I'd happily hold off until there's a more pressing need.
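For context, here's roughly what I'm picturing on the ZFS box side. This assumes an OmniOS/Solaris-style COMSTAR stack with the QLogic ports flipped over to the qlt target driver, so correct me if that's the wrong approach entirely - the zvol name and size are just placeholders:

# carve out a zvol to back the LUN for the ESXi datastore
zfs create -V 500G tank/esxi-lun0
# register the zvol as a COMSTAR logical unit
stmfadm create-lu /dev/zvol/rdsk/tank/esxi-lun0
# expose the LU on all target ports (no masking for now) - GUID comes from the previous command
stmfadm add-view <lu-guid>
# confirm the QLogic ports are actually presenting as FC targets
stmfadm list-target -v

If that's basically sound over direct point-to-point links (no switch), that's what I'm hoping to hear.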
Also, in either case, does bonding/teaming work well (or at all) in this scenario to provide increased one-to-one throughput? I.e. if there were 2-port HBAs in the ESXi hosts, could each host get an aggregated 8Gbps (2 x 4Gbps) to the storage?
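From what I've read so far, FC doesn't really "bond" the way Ethernet LACP does, and the usual trick is multipathing with a round-robin path policy on the ESXi side. Something like this per device is what I'd expect to set (ESXi 5.x syntax, and the naa ID is just a placeholder), but please correct me if I've misunderstood how much aggregate throughput that actually buys:

# set the round-robin path selection policy for the FC LUN
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR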
Thanks for any help guys!