What's less trouble, CX4 or 10GBASE-SR?

unhappy_mage

I'm in the process of buying some 10GbE stuff to play around with. I'm trying to get used gear off eBay so I don't have to spend a fortune, but I'd like the option of buying recent equipment for some pieces. I bought a switch that has some 10G uplinks in the form of XFP slots, and that limits my options a little.
So, the question is this: which of these options locks me into the least-worst standard? I don't like the idea of an XFP-to-SFP+ cable, because no part of it is reusable if I introduce new hardware to the mix. Is CX4 still recommended for new deployments? Does 10GBASE-SR have legs, or has 10GBASE-T eaten its lunch? Can I buy XFP-to-10GBASE-T transceivers?

My current inclination is to go CX4, as it seems you can still get relatively recent hardware with CX4 ports on it, and it's about half the total cost of the other options. Second choice would be 10GBASE-SR, despite the extra fun that comes with involving fiber in any system.
 
10GBASE-SR still wins on power consumption over 10GBASE-T, but I see nothing wrong with CX4. I don't expect to see CX4 phased out for at least a few more years, considering HP is still building flex modules with CX4 ports for their blade chassis.
 
CX4 is already gone, which is why it's cheaper. HP might still be making flex modules, but that's it; no one else is. The CX4 form factor is too big, and everyone else has moved to SFP+. If your distances are short, the cheapest option is SFP+ with twinax cables. Otherwise, go 10GBASE-SR.
 
Yep, VMware basically killed their CX4 support in v5. :(

How so? I'm using CX4 NICs with ESXi 5.1.

I'm using CX4 with no issues. CX4 does have big cables, but that's not the real problem; I think it's more about the short distances. For the right situation it's fine.
 
I've been phasing out CX4 for a while now; 10GBASE-SR is so much easier to manage. For in-rack connections, look at SFP+ DAC cables. They're usually much cheaper than optics and work well too, and the cable diameter stays small on anything under 7 m.
 
I've placed component orders for an SFP+ NIC, an XFP 10GBASE-SR transceiver, an LC/LC OM3 fiber patch cable, and an SFP+ transceiver. Got the whole shebang for $150 :)
 
Myricom 10G-PCIE-8A-C

That doesn't surprise me all that much. At least as of a few months ago, Myricom's chips were PCIe 1.0; a dual-port card had two of them plus a bridge chip to talk PCIe 2.0. We were having performance issues with them and erroneously thought the cards were running at 1.0 speeds, but it turned out to be a problem with our code, and the cards were working fine. The point here is that their ICs seem to be a bit old, and if they're still shipping them on SFP+ cards, that could explain why the older models still work in ESXi: same IC, probably the same driver.
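
Side note for anyone chasing the same ghost: on Linux you can check whether a card has actually trained at its full PCIe link speed before blaming the hardware. A rough sketch, assuming the standard sysfs layout (the *_link_speed attributes need a reasonably recent kernel):

```python
#!/usr/bin/env python3
# Compare the negotiated PCIe link speed/width of every device against
# its maximum, to spot cards that trained down (e.g. to Gen1 rates).
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        cur_speed = (dev / "current_link_speed").read_text().strip()
        cur_width = (dev / "current_link_width").read_text().strip()
        max_speed = (dev / "max_link_speed").read_text().strip()
        max_width = (dev / "max_link_width").read_text().strip()
    except OSError:
        continue  # no PCIe link attributes (host bridges, some legacy devices)
    note = "  <-- below max" if (cur_speed, cur_width) != (max_speed, max_width) else ""
    print(f"{dev.name}: {cur_speed} x{cur_width} (max {max_speed} x{max_width}){note}")
```

Keep in mind a link running below max isn't always a fault; some devices downshift to save power when idle.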

I've placed component orders for an SFP+ NIC, an XFP 10GBASE-SR transceiver, an LC/LC OM3 fiber patch cable, and an SFP+ transceiver. Got the whole shebang for $150 :)

That's more like it. I haven't dug into prices on used 10GbE cards much, but I know you can get Mellanox ConnectX-2s for $100 each, and those are pretty nice NICs. IIRC they support SMB Direct in Windows 8 and Server 2012, and they can also do NFS over RDMA in Linux. If those are $100, I'm sure you can find something cheaper.
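
If you go that route and want to play with NFS over RDMA, it's worth confirming the kernel's RDMA stack actually sees the card before touching mount options. A minimal sketch, assuming the usual sysfs layout; the device name you'll see (something like mlx4_0 for a ConnectX-2) depends on the driver:

```python
#!/usr/bin/env python3
# List the RDMA devices the kernel has registered and each port's state.
# A port reporting ACTIVE means the link is up and usable for RDMA.
from pathlib import Path

ib_root = Path("/sys/class/infiniband")
if not ib_root.is_dir():
    raise SystemExit("no RDMA devices registered - is the driver/RDMA stack loaded?")

for dev in sorted(ib_root.iterdir()):
    for port in sorted((dev / "ports").iterdir()):
        state = (port / "state").read_text().strip()  # e.g. "4: ACTIVE"
        print(f"{dev.name} port {port.name}: {state}")
```

From there, the kernel's nfs-rdma doc covers the actual mount (an rdma transport option, plus the NFS-RDMA port 20049 on older clients).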
 