Switching from iSCSI to FC... need advice

Flapjack

Hello, all.

For my home lab, I am currently running three Dell R710s. Two R710s running XenServer have four iSCSI paths connecting to a Starwind SAN hosting iSCSI (also four paths) on a Win2k12 OS. The iSCSI paths are each on individual subnets, and are running through a dedicated switch.

I am about to add two more R710s running XenServer to the pool, but am reconsidering iSCSI, as it just does not perform... even with multipathing configured and enabled (only one path is used).

At this point, I believe it has something to do with XenServer, but I've exhausted all available resources. I experienced the same thing when running FreeNAS with multipathed iSCSI.

I currently have four Emulex LPe1150 4Gb Fibre Channel cards. What I'm wondering is: if I grab something like an EMC DS-200B (which are dirt cheap on eBay), would that be a much easier way to get a 4Gb connection between hosts and storage?

Of course, I would need cables, and be able to configure all the required bits and bytes in both XenServer and the Win2k12/Starwind server, but it seems like a logical next step to get some more storage bandwidth.
 
Your switch can have a major impact on iSCSI performance, as can configuration. If you want higher throughput, 10GbE is probably easier and quite cheap.
 
Your switch can have a major impact on iSCSI performance, as can configuration. If you want higher throughput, 10GbE is probably easier and quite cheap.
Define "quite cheap." I got the four LPe1150 FC cards for free, and the following 4Gb FC switch is only $50.

Don't get me wrong... I would LOVE 10GBe, but I have not found an inexpensive way to do so.

EMC DS-200B 16-Port 4GB Silkworm 200E Fibre Switch 16x Active Ports 8x SFPs

Of course, I know very little about FC vendors. For all I know, this switch could require licensing, not work with the Emulex cards, which may/may not work with XenServer and/or Win2k12, etc... I have a lot of research to do.
 
That's a Brocade switch.. good luck getting firmware for it. I maintained those switches about 7 years ago. buuuut.. the only licenses you need are for ports and SFPs.. if it comes with the SFPs then you have the licenses. It's a solid switch; mine would be up over a year before I'd reboot or update them. LPe1150 cards are compatible with that switch given the right firmware loaded on the card. Hopefully they are Emulex-branded cards, and not Dell-branded cards. If they are Dell-branded Emulex cards, throw them in the trash.. OEM Emulex firmware will not load most of the time, and you have to use the Dell version and the Dell tools to do it. If you don't have access to Dell's support pages with a login, they will not give it to you either. Even if it's got the Emulex chip right there on the PCB.. if it's got Dell anywhere on it, only Dell tools will work to update it.

With that said.. does your storage support FC? You can't just throw some FC cards into your storage array and expect it to work.. your array must support the FC protocol. I checked out Starwind SAN, and it doesn't appear to.

You said you set up multipath with one path.. that's not multipath, that's a fixed path. Just because you checked the box in the drop-down doesn't mean it's now multipath; multipath = multiple links. Unless I'm misunderstanding you and multiple paths are linked in but there is only one active path. If that's the case, change it to round robin, assuming your array supports that.

4Gb is dead technology, and with how fast the industry is letting it die, you should do the same. Very few vendors support it now, and others have just dropped support for it, meaning one day it'll just quit working. In fact, LPe1150 cards are no longer supported by Emulex or Dell; they quit releasing drivers and updates for those cards a while back.

Yes, you spent $50 on a switch and got some free cards, but if you are looking for better performance.. 4Gb FC is not the way to go.
 
I have tested iSCSI on a Quanta LB6M switch and a few Mellanox cards. The performance was just beyond my wildest expectations. No special tuning either.
 
That's a Brocade switch.. good luck getting firmware for it. I maintained those switches about 7 years ago. buuuut.. the only licenses you need are for ports and SFPs.. if it comes with the SFPs then you have the licenses. It's a solid switch; mine would be up over a year before I'd reboot or update them.
At this point, I've kind of ruled out that switch, mostly because of stories I've read about management via Java (an old version is required) being a PITA. Instead, I'm looking at one of the HP-branded switches (like a 7394A) that have the Power Pack licensing. They're not much more than the Brocade I linked earlier, and seem like a whole lot less hassle.

LPe1150 cards are compatible with that switch given the right firmware loaded on the card. Hopefully they are Emulex-branded cards, and not Dell-branded cards. If they are Dell-branded Emulex cards, throw them in the trash.. OEM Emulex firmware will not load most of the time, and you have to use the Dell version and the Dell tools to do it. If you don't have access to Dell's support pages with a login, they will not give it to you either. Even if it's got the Emulex chip right there on the PCB.. if it's got Dell anywhere on it, only Dell tools will work to update it.
I don't recall seeing "Dell" anywhere on the card, but I will definitely look again. At any rate, I don't believe I will use it, as FreeNAS (likely what I'll use) doesn't support it... at least last time I checked. For FreeNAS, it seems like the only truly supported cards are the Qlogic ones.


With that said.. does your storage support FC? You can't just throw some FC cards into your storage array and expect it to work.. your array must support the FC protocol. I checked out Starwind SAN, and it doesn't appear to.
Yes, Starwind does not seem to support FC. If I did attempt to switch to FC, it would be using FreeNAS... or something else, if you can recommend an alternative.


You said you set up multipath with one path.. that's not multipath, that's a fixed path. Just because you checked the box in the drop-down doesn't mean it's now multipath; multipath = multiple links. Unless I'm misunderstanding you and multiple paths are linked in but there is only one active path. If that's the case, change it to round robin, assuming your array supports that.
What I meant was that even though I have multipath set up (four 1Gbps Ethernet connections), only one path seems to be utilized. Even though querying the multipathing in XenServer (multipath -ll) shows all four links active, traffic only seems to traverse one path... even when booting multiple VMs on multiple storage repositories. I have configured round robin until I'm blue in the face. Whether it was FreeNAS or Starwind, neither would utilize all paths. I've been working with XenServer gurus on the Citrix forums (and I'm a Citrix employee!), but no one has been able to help. I've chalked it up to an issue with XenServer, and am also considering switching my hypervisor (I will actually try that before I try FC).
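
For anyone hitting the same wall, a quick sanity check is to watch the per-NIC byte counters in dom0 while a VM hammers the storage: if multipath -ll shows four active paths but only one interface is moving traffic, round robin isn't actually being applied to that LUN. Here's a minimal sketch (the eth1-eth4 names are placeholders, substitute whatever your iSCSI interfaces are actually called):

```python
#!/usr/bin/env python
# Minimal sketch: sample per-NIC byte counters from /proc/net/dev to see
# whether iSCSI traffic is really spread across the multipath interfaces.
# The interface names below are placeholders -- adjust to your setup.
import time

ISCSI_NICS = ["eth1", "eth2", "eth3", "eth4"]
INTERVAL = 5  # seconds between samples

def read_counters():
    """Return {nic: (rx_bytes, tx_bytes)} parsed from /proc/net/dev."""
    counters = {}
    with open("/proc/net/dev") as f:
        for line in f:
            if ":" not in line:
                continue  # skip the two header lines
            name, data = line.split(":", 1)
            name = name.strip()
            if name in ISCSI_NICS:
                fields = data.split()
                counters[name] = (int(fields[0]), int(fields[8]))
    return counters

before = read_counters()
time.sleep(INTERVAL)
after = read_counters()

for nic in ISCSI_NICS:
    rx = (after[nic][0] - before[nic][0]) / INTERVAL / 1e6
    tx = (after[nic][1] - before[nic][1]) / INTERVAL / 1e6
    print("%s: %.1f MB/s rx, %.1f MB/s tx" % (nic, rx, tx))
```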


4Gb is dead technology, and with how fast the industry is letting it die, you should do the same. Very few vendors support it now, and others have just dropped support for it, meaning one day it'll just quit working. In fact, LPe1150 cards are no longer supported by Emulex or Dell; they quit releasing drivers and updates for those cards a while back.
4Gb may be "dead" from a prod network stance, but for home lab use, it is 4x what I am currently seeing... more if you consider that you need multiple connections/transfers to fill up a 4x1Gb multipath iSCSI setup (assuming it is working as designed). It is also dirt cheap. I would love to go to 10GigE or 16Gb FC, but unless you have some ideas on affordable hardware, it's probably out of my budget. I guess what I could do is bypass the switch and go with a multiport card on the SAN/NAS side, and connect each hypervisor that way. I will have no more than 4 hosts at any given time, so that may be a viable option.


Yes, you spent $50 on a switch and got some free cards, but if you are looking for better performance.. 4Gb FC is not the way to go.
The cards were free because they came with a pair of R710s I bought. I have not bought the switch yet. I was hesitant to drop any money on a switch without talking to some storage gurus first. :)


I have tested iSCSI on a Quanta LB6M switch and a few Mellanox cards. The performance was just beyond my wildest expectations. No special tuning either.
Are you talking about this switch? Quanta LB6M 10GB 24-Port SFP+ Switch with 2 x SFP+ transceiver modules

If so, that is a reasonable price, though it doesn't seem to come with any SFPs. It is also a regular "switch", so I could just do iSCSI over 10gigE, right? Would I be using copper Cat6, or fiber? ...or does it depend on what SFPs I get? Which Mellanox cards are you using?

I am very interested in this approach because, to be honest, I really did not want to go back to FreeNAS. FreeNAS works, but the Starwind box running on Win2k12 just works: I never have to touch it, it never blips, etc. Of course, I don't get the benefits of ZFS, but I'm fine with that.

All-in, it seems like this will be pricier... but will be easier to use/setup, with more options for the SAN side. What would you say you put into your whole setup?
 
At this point, I've kind of ruled out that switch, mostly because of stories I've read about management via Java (an old version is required) being a PITA. Instead, I'm looking at one of the HP-branded switches (like a 7394A) that have the Power Pack licensing. They're not much more than the Brocade I linked earlier, and seem like a whole lot less hassle.

I don't recall seeing "Dell" anywhere on the card, but I will definitely look again. At any rate, I don't believe I will use it, as FreeNAS (likely what I'll use) doesn't support it... at least last time I checked. For FreeNAS, it seems like the only truly supported cards are the Qlogic ones.

Yes, Starwind does not seem to support FC. If I did attempt to switch to FC, it would be using FreeNAS... or something else, if you can recommend an alternative.

What I meant was that even though I have multipath set up (four 1Gbps Ethernet connections), only one path seems to be utilized. Even though querying the multipathing in XenServer (multipath -ll) shows all four links active, traffic only seems to traverse one path... even when booting multiple VMs on multiple storage repositories. I have configured round robin until I'm blue in the face. Whether it was FreeNAS or Starwind, neither would utilize all paths. I've been working with XenServer gurus on the Citrix forums (and I'm a Citrix employee!), but no one has been able to help. I've chalked it up to an issue with XenServer, and am also considering switching my hypervisor (I will actually try that before I try FC).

4Gb may be "dead" from a prod network stance, but for home lab use, it is 4x what I am currently seeing... more if you consider that you need multiple connections/transfers to fill up a 4x1Gb multipath iSCSI setup (assuming it is working as designed). It is also dirt cheap. I would love to go to 10GigE or 16Gb FC, but unless you have some ideas on affordable hardware, it's probably out of my budget. I guess what I could do is bypass the switch and go with a multiport card on the SAN/NAS side, and connect each hypervisor that way. I will have no more than 4 hosts at any given time, so that may be a viable option.

The cards were free because they came with a pair of R710s I bought. I have not bought the switch yet. I was hesitant to drop any money on a switch without talking to some storage gurus first. :)

Actually, don't worry about Java.. just do it from the command line over SSH. The command line with copy and paste is much faster than GUI clicking. Just because it's an HP switch doesn't mean anything.. it's still a Brocade switch, and they all run roughly the same firmware. The same issues you'd have with the one you linked, you'll have with the HP. The main differences between the two are CPU and memory; the big one is the number of zones you can create. On the original one it's something like 128 zones, on the other it's closer to 512. Again, with both, good luck getting the firmware, as you most likely need a valid support contract. If those HBAs came with the server, then they maaay be tied to the service tag, which may get you access to the drivers or firmware. You'll want to boot from USB and use the DOS packages to update things.

I can't recommend an alternative that does FC. The entry cost for FC is so high that it's not feasible for most people, and most consumer home solutions would be too difficult to develop for. I ran into some goofy issues with onboard Dell 1-gig ports, and updating the firmware resolved them. That may resolve your multipath issues.

No, 4Gb is dead, and not good for home use. Why? When the vendor stops making drivers and firmware updates for the hardware, at some point you'll patch your hypervisor, or whatever virtualization platform you decide to use, and the cards themselves will just stop working. If they continue to work, you'll have inconsistent performance and really obscure issues to troubleshoot. Worst part? No support, and no one will help you. You'll either have to downgrade or never upgrade. I have first-hand experience with this as I type this.

For home use, iSCSI is your best bet. iSCSI isn't going away anytime soon.
 
Unfortunately, the R710s are as updated as updated can be. I also have a slew of NC364Ts that I've tried as well. Same behavior (only one path utilized). I really think it may be a XenServer issue, so I may install ESXi on a test server over the weekend and see how it does.

Alternatively, what do you think of olavgg's 10GigE advice?
 
You need SFP+ transceivers, which you can find for $10-15 on eBay. You can also use DAC SFP+ cables. I highly recommend getting this deal! 10G network card + DAC cable. You don't need a switch either; a direct connection also works, but you are limited to two computers.
 
You need SFP+ transceivers, which you can find for $10-15 on eBay. You can also use DAC SFP+ cables. I highly recommend getting this deal! 10G network card + DAC cable. You don't need a switch either; a direct connection also works, but you are limited to two computers.
Damn. That's an awesome deal. What about the switch? Did you see the link I posted earlier, or are there better deals to be had?

Quanta LB6M 10GB 24-Port SFP+ Switch with 2 x SFP+ transceiver modules
 
FC can have advantages over iSCSI due to lower latency and protocol overhead, but only when you are at the same performance level. A 4G FC solution will never be comparably fast to a 10G iSCSI solution, but it will add a lot of extra complexity. 8G FC may be nearly as fast as 10G iSCSI. I would do this only if FC knowledge is the prime factor.

I would never consider such a switch with performance in mind; I would use iSCSI and 10G, either with a 10G switch (quite expensive) or with dedicated links from your storage to each host.

Example:
If you buy 3 used Intel X520-2 NICs, e.g. from eBay, you can connect up to 6 clients, each with a dedicated 10G link. You can use an Intel X520-1 on the client side. No need for a switch, and as long as 7 m is enough, you can use quite cheap copper DAC cables. There are cheaper NICs, like older IB cards or ones with a Tehuti chipset such as the Synology cards, but I would avoid them as driver support is or will be bad in the future. You can also use the 1G NICs for dedicated links. I am just testing such a solution in my lab with the Intel X710 in mind (40 GbE or 4 x 10 GbE per NIC) and Solaris/OmniOS ZFS storage with bridging enabled (using the storage box like a 10G switch). This allows all clients to see each other and to reach the internet/uplink via the 1G card of the storage. The intention is mainly a silent and cheap high-performance SSD-only office storage server for 4K video editing teams, without an expensive and loud 10G switch.

Beside that, I prefer NFS over block-based access like FC/iSCSI. It is as fast but much easier to use, especially as you can have multiuser access to the storage as well, even via SMB, for example to easily access ZFS snaps/versioning from a Windows host via "Previous Versions".

You may also rethink Windows vs ZFS.
ZFS offers pool-based storage virtualisation with quotas and reservations (without partitions of a fixed size) and a much higher level of data security than Windows NTFS or even ReFS, and it is in nearly every case faster thanks to its advanced read and write caching based on RAM or SSDs. ZFS performance scales very well: sequential throughput with the number of data disks, and IOPS with the number of vdevs that a pool is built from. If you need power-loss-safe sync write behaviour like with a hardware RAID + BBU, you can use the ZFS ZIL or a dedicated Slog device. This is much faster and more flexible than the traditional hardware RAID + BBU alternative.
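
As a rough illustration of that scaling rule (the per-disk figures below are assumptions for ordinary 7200 rpm disks, not benchmarks): a single wide RAID-Z vdev gives lots of sequential bandwidth but only one vdev's worth of random IOPS, while a pool of mirrors trades some bandwidth for far better IOPS.

```python
# Back-of-the-envelope estimate of how a ZFS pool's performance scales with layout.
# Per-disk numbers are assumptions for a typical 7200 rpm SATA disk, not measurements.

DISK_SEQ_MBPS = 150   # assumed sequential throughput per disk
DISK_IOPS = 100       # assumed random IOPS per disk

def pool_estimate(vdevs, disks_per_vdev, parity_per_vdev):
    """Estimate (sequential MB/s, random IOPS) for a pool of identical vdevs."""
    data_disks = vdevs * (disks_per_vdev - parity_per_vdev)
    seq_mbps = data_disks * DISK_SEQ_MBPS   # sequential scales with data disks
    iops = vdevs * DISK_IOPS                # random IOPS scale with vdev count
    return seq_mbps, iops

# Six disks either way: one 6-disk RAID-Z2 vdev vs. three 2-disk mirror vdevs
print(pool_estimate(vdevs=1, disks_per_vdev=6, parity_per_vdev=2))  # ~(600 MB/s, 100 IOPS)
print(pool_estimate(vdevs=3, disks_per_vdev=2, parity_per_vdev=1))  # ~(450 MB/s, 300 IOPS)
```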

Management of a web-based ZFS storage appliance is much easier than a Windows server, even with the much larger feature set that ZFS and Solaris offer.

You may try a Solaris or OmniOS (free Solaris fork) based appliance as well. This is where ZFS comes from, with the best integration of the OS with ZFS and kernel-based services like SMB, NFS, or iSCSI/FC with COMSTAR; see Configuring Storage Devices With COMSTAR - Oracle Solaris Administration: Devices and File Systems

Setup and management can be done according to my HowTo; see http://www.napp-it.org/doc/downloads/napp-it.pdf


Last remark:
Is your disk array, with its caching features, fast enough? It can be the case that you want to optimize the network while your array is the bottleneck. For VM responsiveness, IOPS is mostly the limiting factor, not throughput; throughput matters for larger file transfers.
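
A quick worked example of that point (the array IOPS figure is an assumption for a small set of spinning disks, not a measurement of any particular box):

```python
# Why IOPS, not link speed, usually limits VM responsiveness.
# Numbers are assumptions for a small array of 7200 rpm disks.

array_random_iops = 300   # assumed random-read IOPS for the whole array
block_size_kb = 4         # typical small random I/O issued by VMs

random_mb_s = array_random_iops * block_size_kb / 1024.0
print("Random 4K workload tops out around %.1f MB/s" % random_mb_s)  # ~1.2 MB/s
# At that rate even a single 1 GbE link is idling; the spindles are the bottleneck.
# Only large sequential transfers get anywhere near saturating the network.
```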
 
Don't buy Intel 10G cards; they are overpriced and use a lot more power than the Mellanox cards. Latency for 10G fiber is 100 ns, DAC 300 ns; 10GBase-T by comparison is 2-3 µs. Mellanox has excellent drivers, even for Windows 10.
 
FC can have advantages over iSCSI due to lower latency and protocol overhead, but only when you are at the same performance level. A 4G FC solution will never be comparably fast to a 10G iSCSI solution, but it will add a lot of extra complexity. 8G FC may be nearly as fast as 10G iSCSI. I would do this only if FC knowledge is the prime factor.

I would never consider such a switch with performance in mind; I would use iSCSI and 10G, either with a 10G switch (quite expensive) or with dedicated links from your storage to each host.
While I've seen the light and agree with you on 10G over FC, based on the recommendations made earlier in this thread, I wouldn't call it expensive. I hadn't even considered 10G due to the price of the new SOHO switches I had looked up; finding the Quanta switch (see the conversation above) was exactly what I needed. I'm in just over $400 for the switch and five 10G cards with Twinax cables. Additionally, I can actually use that switch for specific stuff on my network, like my main PC and my HTPC; both could benefit from 10GigE. Of course, I'll have to find something that runs longer than the cables I purchased, so I'll just run fiber up to it and get SFPs for those physical systems.

Example:
If you buy 3 used Intel X520-2 NICs, e.g. from eBay, you can connect up to 6 clients, each with a dedicated 10G link. You can use an Intel X520-1 on the client side. No need for a switch, and as long as 7 m is enough, you can use quite cheap copper DAC cables. There are cheaper NICs, like older IB cards or ones with a Tehuti chipset such as the Synology cards, but I would avoid them as driver support is or will be bad in the future. You can also use the 1G NICs for dedicated links. I am just testing such a solution in my lab with the Intel X710 in mind (40 GbE or 4 x 10 GbE per NIC) and Solaris/OmniOS ZFS storage with bridging enabled (using the storage box like a 10G switch). This allows all clients to see each other and to reach the internet/uplink via the 1G card of the storage. The intention is mainly a silent and cheap high-performance SSD-only office storage server for 4K video editing teams, without an expensive and loud 10G switch.
While a silent solution normally appeals to me, expandability matters even more. Also, the Intel cards aren't well recommended here and are super expensive compared to the others. I would need three of the two-port cards to connect the four hypervisors I currently have. For a couple of systems, a switchless solution could work... but not for my setup.

Beside that, I prefer NFS over block-based access like FC/iSCSI. It is as fast but much easier to use, especially as you can have multiuser access to the storage as well, even via SMB, for example to easily access ZFS snaps/versioning from a Windows host via "Previous Versions".
I wholeheartedly disagree with you on block- vs file-based storage. File-based storage just does not have the flexibility that block-based does. Also, good luck trying to do MPIO over file-based storage. Yes, you could use LACP, but then you're adding another configuration item and dependency (switch hardware and NICs have to support 802.3ad). With block-based storage, the client end sees a hard drive and does what it wants/needs with it. Multiple access is no problem for a hypervisor that controls access via locks; I run XenServer, and all the hosts in my cluster have no problem with contention.

You may also rethink Windows vs ZFS.
ZFS offers pool-based storage virtualisation with quotas and reservations (without partitions of a fixed size) and a much higher level of data security than Windows NTFS or even ReFS, and it is in nearly every case faster thanks to its advanced read and write caching based on RAM or SSDs. ZFS performance scales very well: sequential throughput with the number of data disks, and IOPS with the number of vdevs that a pool is built from. If you need power-loss-safe sync write behaviour like with a hardware RAID + BBU, you can use the ZFS ZIL or a dedicated Slog device. This is much faster and more flexible than the traditional hardware RAID + BBU alternative.
This is actually the first time I've used a Windows-based storage solution (Starwind on top of Win2k12); I've been using one version of FreeNAS or another for the last 10 years. I agree, ZFS is truly a dream. On my Starwind box, I use RAID5. On the other hand, the Starwind server is much easier to manage (yes, Windows can be very easy to manage if you know what you're doing) and is overall more stable than my FreeNAS machines ever were. Even with a fast SSD as my L2ARC drive, the Starwind box serving up iSCSI targets on a RAID5 array (Dell H700 controller) is faster, and the battery protects me from the RAID5 write hole. Of course, I miss snapshots and the ability to add drives as needed, but it's not all that bad. My current backup solution is enough to keep me out of trouble. When I was using FreeNAS, not being able to cluster was a real letdown. I was limited to replicating snaps to another FreeNAS machine; having to restore in that case would've been a mess. I'm glad I never needed to.

Management of a web-based ZFS storage appliance is much easier than a Windows server, even with the much larger feature set that ZFS and Solaris offer.
I disagree. Each is just as easy to manage as the other. I don't know if you haven't used the latest Windows Server OSes, but they're 100% stable for me, easy to manage, and can cluster Starwind (FreeNAS cannot).

You may try a Solaris or OmniOS (free Solaris fork) based appliance as well. This is where ZFS comes from, with the best integration of the OS with ZFS and kernel-based services like SMB, NFS, or iSCSI/FC with COMSTAR; see Configuring Storage Devices With COMSTAR - Oracle Solaris Administration: Devices and File Systems

Setup and management can be done according to my HowTo; see http://www.napp-it.org/doc/downloads/napp-it.pdf
I've actually considered trying napp-it and/or openindiana in the past, but to be honest, I just don't have the time to mess around with that stuff. With Win2k12 and Starwind, I can have a SAN up in less than 2hrs.

Last remark:
Is your disk array, with its caching features, fast enough? It can be the case that you want to optimize the network while your array is the bottleneck. For VM responsiveness, IOPS is mostly the limiting factor, not throughput; throughput matters for larger file transfers.
I am absolutely maxed out on Ethernet. I certainly do not have the fastest storage setup... by any stretch of the imagination, but it is more than enough to fill several 1Gbps connections. Even a three-drive RAID5 array with my WD Red 7200rpm drives will flood a 1-gig connection.
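
For what it's worth, the arithmetic backs that up, assuming roughly 150 MB/s of sequential throughput per drive (an assumption, not a benchmark of those specific Reds):

```python
# Sanity check on "a three-drive RAID5 floods a 1 Gb link".
per_disk_seq_mb_s = 150          # assumed sequential rate per drive
data_disks = 2                   # 3-drive RAID5 = 2 data disks + 1 parity
array_seq_mb_s = data_disks * per_disk_seq_mb_s

gige_mb_s = 1e9 / 8 / 1e6 * 0.94   # ~117 MB/s usable on 1 GbE after protocol overhead
print("~%d MB/s from the array vs ~%d MB/s over a single 1 GbE link"
      % (array_seq_mb_s, gige_mb_s))
```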
 
Unfortunately, the R710s are as updated as updated can be. I also have a slew of NC364Ts that I've tried as well. Same behavior (only one path utilized). I really think it may be a XenServer issue, so I may install ESXi on a test server over the weekend and see how it does.

Alternatively, what do you think of olavgg's 10GigE advice?

Not sure if ESXi will do true multipath if it's not licensed, so you may need that. Not 100% sure.

He makes a good point. 10GigE on a basic switch, doing something simple in a class C subnet, would net you the biggest performance gains with minimal configuration. It would walk all over a 4-gig connection. 8 gig? Uhh.. it might be close. FC is going to be lower latency than iSCSI due to it basically being a point-to-point protocol. In theory 10gig is "faster" than 8Gb FC, but 8Gb FC is more efficient without all the iSCSI TCP/IP overhead, so that race would be close. I'll be 100% honest with you.. there is a certain "wow" factor in running FC at home, but it wears off pretty quickly when you realize it's more of a pain than it's worth. Sounds like you are looking at a 10GbE solution. It may seem more expensive than 4Gb FC, but 10gig is going to be around and actively developed much longer than 4Gb FC. The 10gig cards and infrastructure you buy today will easily last you 5 or 6 years. 4Gb FC has been dead and unsupported for easily 3 years, if not more.
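
To put rough numbers on that race (the line encodings are the standard figures; the protocol-overhead percentages are assumptions, not measurements):

```python
# Rough usable-bandwidth comparison: 8 Gb FC vs 10 GbE iSCSI.

def usable_mb_s(line_rate_gbps, encoding_efficiency, protocol_overhead):
    data_gbps = line_rate_gbps * encoding_efficiency * (1 - protocol_overhead)
    return data_gbps * 1000 / 8  # convert Gbit/s to MB/s

# 8GFC: 8.5 Gbit/s line rate, 8b/10b encoding (80%), a little FC framing overhead
print("8Gb FC          : ~%.0f MB/s" % usable_mb_s(8.5, 0.80, 0.03))
# 10GbE iSCSI: 10.3125 Gbit/s line rate, 64b/66b (~97%), Ethernet+TCP/IP+iSCSI headers
print("10GbE, 1500 MTU : ~%.0f MB/s" % usable_mb_s(10.3125, 64 / 66.0, 0.06))
print("10GbE, 9000 MTU : ~%.0f MB/s" % usable_mb_s(10.3125, 64 / 66.0, 0.02))
```

Either way, both land far beyond what a 4x1Gb MPIO setup can deliver, which is the real point here.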
 
Not sure if ESXi will do true multipath if it's not licensed, so you may need that. Not 100% sure.

He makes a good point. 10GigE on a basic switch, doing something simple in a class C subnet, would net you the biggest performance gains with minimal configuration. It would walk all over a 4-gig connection. 8 gig? Uhh.. it might be close. FC is going to be lower latency than iSCSI due to it basically being a point-to-point protocol. In theory 10gig is "faster" than 8Gb FC, but 8Gb FC is more efficient without all the iSCSI TCP/IP overhead, so that race would be close. I'll be 100% honest with you.. there is a certain "wow" factor in running FC at home, but it wears off pretty quickly when you realize it's more of a pain than it's worth. Sounds like you are looking at a 10GbE solution. It may seem more expensive than 4Gb FC, but 10gig is going to be around and actively developed much longer than 4Gb FC. The 10gig cards and infrastructure you buy today will easily last you 5 or 6 years. 4Gb FC has been dead and unsupported for easily 3 years, if not more.
I agree with everything you've said. When I was considering FC, it was due to price. I've deployed fiber and FCoE before (Cisco Nexus 7606 in a NetApp/VMware environment), but it was much easier with all the appropriate pieces laid out in front of you. I couldn't give two hoots about the wow factor; finding all the right stuff for cheap (that I would not have to worry about licensing for) was definitely the concern. What I didn't consider was that the FC stuff was essentially dead, and that there were actually affordable 10GigE solutions out there.

I ended up ordering the switch I linked earlier. I paid $349 after tax and shipping. I made an offer on five of the cards+cables olavgg linked, and settled on $90 with the seller. $450 for 5 servers is not bad at all. Plus, knowing it should work for a while is a good feeling.
 
Might also look at Brocade 1020 cards as well. I've been able to pick those up for sub-$50 many times in the past.
 
I got a pair of Mellanox cards (single port) off eBay for $36 for both, plus a $20 SFP+ cable, and used that for my PC -> server connection. $56 and I was running 10Gig. With a second pair, assuming you have enough PCIe slots in the servers, you could connect up a second server, etc.

 
I have tested iSCSI on a Quanta LB6M switch and a few Mellanox cards. The performance was just beyond my wildest expectations. No special tuning either.
+1000000 !

Yes, Starwind does not seem to support FC. If I did attempt to switch to FC, it would be using FreeNAS... or something else, if you can recommend an alternative.
Previous versions of Starwind were able to convert FC to iSCSI.
 
Using StarWind with X520 & X540 cards, 3 years at cruising altitude & periodic cleaning with a duster. The ability to omit the 10G switch rocks!
If you decide you need one - check Netgear as ultra affordable stuff, or Mellanox SX1012 if you need top performance.
PS: Used Mellanox appears on ebay for ~$3-4K from time to time. Costs like a kidney, but it's already 50% off list.
 
You need SFP+ transceivers, which you can find for $10-15 on eBay. You can also use DAC SFP+ cables. I highly recommend getting this deal! 10G network card + DAC cable. You don't need a switch either; a direct connection also works, but you are limited to two computers.

Do those work in VMware? I could not find a reference for them.
 