10GbE group for my C6100

zankza · n00b · Joined: Mar 7, 2013 · Messages: 44
I'm looking to create an intranet for my four-node C6100 with 10GbE.

The idea would be to install a dual-port 10GbE PCIe card in each node, bridge the two ports on each card, and connect every node to every other node, creating an aggregated data link.

The four nodes are a Hyper-V failover cluster; the 10GbE will be used for live VM migration, etc.

I have a good basic understanding, but this whole SFP and fiber world is totally new to me, and I have a decent budget to toy around with.

Four adapters, eight cables - what else do I need?
 
A 10Gb switch, probably two for redundancy.

While it is possible to connect two servers directly via Ethernet and let them talk IP, it doesn't really work with more than two servers: only directly connected servers know how to reach one another, and anything more than one device away won't be reachable without routing.

Maybe you could do something with RRAS, but my brain turns to mush just thinking about that alongside Hyper-V's networking needs.

Get two 10Gb switches and call it a day. :)
 
If the machines are close to each other, just go Cat 6/Cat 7 RJ-45 10GBASE-T with a Netgear XS708E for under $1k, plus about $350 per Intel X540-T1 10G NIC.

Reviews say the Netgear switches are extremely loud, so I hope you have a server closet.
 
Funny you mention this - I actually had to Google what a C6100 was, since the OP didn't describe it beyond the marketing name.

But the four nodes are close - like kissing-cousin close. :)

[image: poweredge-c6100-overview3.jpg]
 
That's the best HardForum can bring up?

I've come up with a more elegant and better solution.

I'll be buying:
Two dual-port QSFP (40GbE) cards, $200
Two dual-port SFP (10GbE) cards, $60
Two QSFP 0.5 m cables
Two QSFP-to-4x-SFP breakout cables

Then the cards and cables would be set up like this:

Nodes A and D have dual QSFP, nodes B and C have dual SFP.

Node A: 1x SFP to B, 1x SFP to C, 1x QSFP to D
Node D: 1x SFP to B, 1x SFP to C, 1x QSFP to A
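Just to make the proposed wiring easier to reason about, here is a minimal Python sketch (node letters and links copied from the plan above, nothing else assumed) that maps out which node pairs end up with a direct link and which would need another node to bridge or route for them:

```python
# Quick sanity check of the proposed direct-connect mesh (illustrative only).
# Links taken from the plan above: A-B, A-C, A-D (QSFP), D-B, D-C.
links = [("A", "B"), ("A", "C"), ("A", "D"), ("D", "B"), ("D", "C")]

adjacency = {}
for x, y in links:
    adjacency.setdefault(x, set()).add(y)
    adjacency.setdefault(y, set()).add(x)

nodes = sorted(adjacency)
for i, a in enumerate(nodes):
    for b in nodes[i + 1:]:
        status = "direct link" if b in adjacency[a] else "no direct link - needs bridging/routing"
        print(f"{a} <-> {b}: {status}")
```

Running it shows every pair connected except B and C, which is exactly the question raised in the next reply.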
 

How does B talk to C? Are they also directly connected using the remaining SFP port pair?

How would you IP these four nodes? I'd like to see your storage network IP and subnet assignments, and your data network IP and subnet assignments, before commenting further. You mention that you have an excellent understanding of the virtualization software; throwing SFP into the mix doesn't change anything. Fiber is simply a medium of communication.

I think you're really making this overly cumbersome for no apparent reason.
I honestly can't tell if you're trolling or serious.
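As an illustration of the kind of addressing plan being asked for here - the subnets below are entirely made up, not anything the OP has shared - a minimal sketch using Python's ipaddress module, with separate storage, migration, and heartbeat networks:

```python
# Hypothetical addressing plan for a four-node cluster: one /24 each for
# storage (SAN), live migration, and heartbeat. Subnets are placeholders.
import ipaddress

networks = {
    "storage":   ipaddress.ip_network("10.10.10.0/24"),
    "migration": ipaddress.ip_network("10.10.20.0/24"),
    "heartbeat": ipaddress.ip_network("10.10.30.0/24"),
}

nodes = ["A", "B", "C", "D"]
for name, net in networks.items():
    hosts = list(net.hosts())
    for i, node in enumerate(nodes):
        print(f"{name:10} node {node}: {hosts[i]}/{net.prefixlen}")
```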
 
That's why I came here to share and ask for more information.

My current cluster works beautifully; everything runs smoothly. I'm just looking at upgrading my 1Gbit network to 10Gbit.

My research shows that SFP and QSFP are almost the same price, and I don't mind getting 40GbE instead of 10GbE if it's only a couple of bucks more.

I'd love it if somebody here could tell me the cheapest and simplest way to get all nodes connected at 10GbE - four 10GbE cards with one switch that has four SFP ports?
 
How is everything set up right now? Are you using the integrated copper ports on the back of the nodes?
Do the copper ports handle both data and storage traffic at the moment?

If you go fiber, what happens to your data traffic - will it still go out the copper ports?
 
Currently each node has a gigabit PCIe NIC installed, bringing the total to three usable NICs. Right now it's configured like this: one for heartbeat, one for migration, and the last one for the SAN.

If I go fiber, I'll use the copper for heartbeat and management.

I don't have a bunch of benchmarks with me right now - I'm out of the States. The storage is just SSDs, but it still takes a long time to transfer 96 GB of RAM from one node to another.
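For a rough sense of why that 96 GB matters, a back-of-the-envelope calculation, assuming around 75% of line rate as usable throughput (a guess, not a measurement) and ignoring dirty-page re-copies during live migration:

```python
# Back-of-the-envelope live-migration time for 96 GB of guest RAM.
# Assumes ~75% of line rate is usable payload throughput (an assumption,
# not a benchmark) and ignores re-copying of dirtied pages.
ram_bytes = 96 * 1024**3
efficiency = 0.75

for name, gbit in [("1 GbE", 1), ("10 GbE", 10), ("40 GbE", 40)]:
    bytes_per_sec = gbit * 1e9 / 8 * efficiency
    seconds = ram_bytes / bytes_per_sec
    print(f"{name}: ~{seconds / 60:.1f} min ({seconds:.0f} s)")
```

That works out to roughly 18 minutes at 1 GbE versus under 2 minutes at 10 GbE, which matches the complaint above.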
 
If you treat the fiber like copper, then you can basically perform a tear-out and replace.

You'll just need a switch that can handle four 10Gbps SFP+ ports, just like you currently have the SAN copper ports plugged into a switch.

Of course, I say "tear out and replace" like it's that simple - make sure you have your network configuration and Hyper-V configuration documented, because pulling that PCIe copper NIC will cause Windows to delete all knowledge it had of that network, which ripples through any other program that relied on that information.

If the PCIe card wasn't carrying your SAN traffic, then you're going to be updating two different networks: the one carried by the PCIe card and the one that carried the SAN traffic.
 
The 10G ports on that one are CX4, so not what you're looking for. Hence the cheapness.
 
Care to find some examples of which card and which switch you mean? I'd like to take a look, please.
 
Also, that switch on eBay is LOL - they modded the case and screwed an extra fan onto the top of it. :)

http://www.cisco.com/c/en/us/support/switches/catalyst-4500x-f-16-sfp-switch/model.html or
http://www.netgear.com/business/products/switches/managed/m7300.aspx or maybe
http://www.netgear.com/business/products/switches/managed/m5300.aspx

These will be more up your alley, but I'm going to be honest with you - my 10Gb usage is minimal; I just understand Ethernet technologies. Also, don't forget to include the SFP+ transceivers and fiber patch cords when you start piecing this all together.
 
This is purely lab testing, so I'd stick to eBay purchases for now.

This is exactly what I meant about transceivers and fiber patch cords - there seems to be no page or guide covering all of this. I'm trying to learn about fiber, but there's no thorough, well-detailed comparison out there.

I've never had any problem learning Ethernet-related things, and I taught myself networking with no trouble; there simply isn't enough well-documented material on the internet about fiber and all the standards involved.
 
You're still going to sink some dollars - 10Gb switching isn't exactly commonplace.

http://www.ebay.com/itm/Dell-PowerC...10GB-COPPER-Layer-3-Switch-8024-/171241026408 or
http://www.ebay.com/itm/IBM-7309-HC6-90Y3529-90Y9442-G8124-E-BNT-10GB-24-Port-Switch-/181316576729 look like they might work. There are others there as well - but they're all in the $1,500-to-the-sky range.

As another poster mentioned about that LOL eBay switch, there is a 10Gb copper standard called CX4, but it has been surpassed by 10GBASE-T. CX4 was created and used before the industry could get an RJ-45-terminated cable to meet the demands of 10Gb. It is pretty much a dead standard today, which is why you can find those switches so cheap - there is very little demand for them.

If you wanted to go with CX4 for this specific environment, I don't see a technical reason why not. You just need to understand that you're buying into a technology that is being passed over.
With that said, you can go with this switch: http://www.ebay.com/itm/BROCADE-TRX...BIT-ETHERNET-SWITCH-4X-CX4-10GB-/380760792372
https://www.google.com/search?q=cx4+network+card&prmd=ivns&source=univ&tbm=shop&tbo=u&sa=X for network cards and
http://www.cablesondemand.com/pcate...brary/InfoManage/CX4_CABLES_(10GBASE-CX4).htm for cables. (Just quick Googling - I've never used any of the vendors listed here.)


As for the transceivers and fiber patch cords - alrighty then :)

Some primers:
10Gb Optical Standards: http://en.wikipedia.org/wiki/10-gigabit_Ethernet#Optical_fiber
If you go with fiber, for this specific case, you will use 10GBASE-SR. It is the cheapest fiber option and is good for short distances like your lab.

10Gb Copper Standards:
http://en.wikipedia.org/wiki/10-gigabit_Ethernet#Copper
If you go with copper, there are two choices.
If your distances are short (less than 7 meters) and everything is in the same room, then for simplicity's sake you would use SFP+ Direct Attach. I would probably take this route over fiber, provided this short-range setup is long-term. A Direct Attach cable has SFP+ transceivers permanently attached at both ends. We use these cables for switch-to-switch connections; it is fantastically simple and it just works.

If you need to go through walls or cover greater distances AND still want to use copper, then you would go with 10GBASE-T. I say "want to use copper" because, honestly, fiber would be a sure bet over copper at these speeds... and likely cheaper too.

And the last thing is the transceiver. There are a few standards out there but it is all dependent on the format that the network card maker and switch maker decided to use. http://en.wikipedia.org/wiki/10-gigabit_Ethernet#Physical_layer_modules
As you can read there, there are basically two forms, XENPAK and SFP+. It is possible to have a XENPAK transceiver on the switch and an SFP+ transceiver at the host communicate, provided they are talking the same Ethernet standard - in your case, we said that 10GBASE-SR is best if you go with fiber. The switches I listed above all use the SFP+ transceiver format. Most SFP+ transceivers use LC fiber connectors, so you would need fiber patch cords with LC-LC connectors on them; LC-LC indicates that there are LC connectors on both ends of the cord. http://www.cablestogo.com/category/fiber/62-5-125-duplex-mm-fiber/62-5-125-dpx-mm-taa-fiber is a good place to look for patch cords.

Also, you didn't specify which network card you were looking at for your hosts, but you did say SFP. Make sure that they comply with the SFP+ standard, not plain SFP. There is apparently only one 10Gb standard that accepts SFP transceivers, and that is 10GBASE-USR.

So, to recap:
CX4 is an old copper 10Gb ethernet cabling standard. There are cheap switches to be had, but if you decided to expand this in the future, then you're likely to have to rip out and replace everything with SFP+ network switches and cards.

If you use 10Gb copper, and everything is in the same room - then your best bet is to use the Direct Attach SFP+ cables. They already come with the transceivers, so after purchasing the network cards and switch that have SFP+ ports, you have everything you need to go.

If you use 10Gb fiber, the standard you will want to use is 10GBASE-SR. Just like with copper above, it is probably best to use network cards and a switch that uses SFP+ ports. Once you settle on your SFP+ fiber transceivers, you will need to purchase the appropriate fiber patch cord. Typically SFP+ transceivers use LC connectors.

Note: There really will be no difference in the switch or network cards if you go with SFP+ ports. Then it is just up to you to decide what medium works best for you, copper or fiber.
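To condense that recap into a checklist, here is a small sketch of what each path needs per node plus the shared switch - the groupings follow the post above, and the part names are generic placeholders rather than specific SKUs:

```python
# Per-path shopping checklist for a four-node cluster, condensed from the
# recap above. Part names are generic placeholders, not specific SKUs.
options = {
    "CX4 (legacy copper)": {
        "per_node": ["CX4 NIC", "CX4 cable"],
        "shared":   ["switch with 4+ CX4 ports"],
    },
    "SFP+ direct attach (copper, same rack)": {
        "per_node": ["SFP+ NIC", "SFP+ DAC cable (transceivers built in)"],
        "shared":   ["switch with 4+ SFP+ ports"],
    },
    "SFP+ fiber (10GBASE-SR)": {
        "per_node": ["SFP+ NIC", "2x SR transceivers (NIC end + switch end)",
                     "LC-LC multimode patch cord"],
        "shared":   ["switch with 4+ SFP+ ports"],
    },
}

for path, parts in options.items():
    print(path)
    print("  per node:", ", ".join(parts["per_node"]))
    print("  shared:  ", ", ".join(parts["shared"]))
```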
 
Fantastic! I took my time to sit down and read the whole post as well as the websites.

I'd seen all the terms you used in the post, I just didn't know which belonged to which group. You've made it clear that there is copper and there is fiber, and what belongs to each - that has cleared up a lot of things for me.

It's obvious that I'll only be requiring 10Gb copper; even 1 m is overkill, but it appears the 1 m cables are even cheaper than the 0.5 m ones. Like I said, I want to create a four-node failover cluster, so I need a high-speed interconnect between all four nodes. Additionally, I'm not looking to upgrade or expand later, so I don't mind the whole "passed over" issue - by the time I need new servers, I'll get the proper technology for those needs.

Although you said there's no real difference between them, which route would you have gone with, and why? Also, as I explained, I have a C6100, and four nodes is just the perfect balance for me, but I need a way to move data quickly between the servers - what route/technologies would you have used? A single gigabit link isn't enough. Just for curiosity's sake. I'll probably have more questions, but I can't think of one at the moment.
 
A friend of mine mentioned InfiniBand. I've heard the term a couple of times, but I'm not sure where it fits in or what's so different about it. Looking at some eBay results, it's apparent that InfiniBand is really cheap, so there must be something bad about it?
 

I have absolutely no experience or knowledge with InfiniBand, other than what I've read on wikipedia or whatnot, so I couldn't give you an opinion on it one way or another. Sorry.
 

Really, at the end of the day you want 10Gb connectivity, and there are different ways to get the same end result. Since you're not concerned about locking yourself into obsolete technology, it's just a numbers game now.

[image: Capture.jpg - pricing spreadsheet]

The first group is 10Gb over copper.
The second group is CX4.
The third group is 10Gb over fiber.

I rounded up from most of the pricing on the internet, to take into account taxes, shipping and any other extra expenses or pricing instability (ebay).

You can see that CX4 is the cheapest - but you'll throw it all away eventually.
10Gb over copper is $600 more and fiber is an additional $300 on top of that.

Hope this helps.
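If you want to redo that spreadsheet with your own quotes, the arithmetic is just switch + 4 x (NIC + per-node cable/optics). A tiny helper follows; the prices in it are placeholders, not figures from the spreadsheet above - substitute whatever quotes you actually find:

```python
# Total-cost helper for a four-node build: one switch plus a NIC and
# cabling/optics per node. Prices below are placeholders only.
NODES = 4

def total(switch, nic, link):
    """switch + NODES * (NIC + per-node cable/transceiver cost)."""
    return switch + NODES * (nic + link)

quotes = {
    "CX4":             total(switch=300,  nic=150, link=75),
    "SFP+ DAC copper": total(switch=1200, nic=100, link=40),
    "SFP+ SR fiber":   total(switch=1200, nic=100, link=110),
}

for option, cost in quotes.items():
    print(f"{option:16} ~${cost}")
```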
 
FYI, if you haven't already, go take a look at the ServeTheHome forums - they have a large number of people playing around with the C6100 over there and a ton of info on them.
 
As you have stated you are willing to go second hand/eBay, have you considered the Brocade 1020 10GB NICs that were popping up on here? It wouldn't help with the cost of a switch but it would save you a significant amount as they could be had for about $35!
 
I'm having a hard time finding a reasonable switch - could you please show some switch choices? If you were in my shoes, what switches would you have gone with?
 
Well -

What is your total budget for this? Without a firm number, I can't help you find actual parts.

As you could see from my quick and dirty spreadsheet, the CX route generally had the cheapest switches, but the most expensive NICs and cabling.

The Copper SFP route had the most expensive switch, but had the cheapest combined cable/SFP.

The Fiber was the most expensive combined.

But honestly, I'm not sure I would trust anything much lower than this. Maybe 10% less? eBay is full of ripoffs and as-is "deals".

If you're looking for a $1000 solution, you can look at something like a few Intel PRO/1000 PT Quad or Dual NICs at about $50-150 each and a switch that can support at least four 802.3ad port groups.

Mind you, depending on how the NICs and switch do the link aggregation, you may only see 1Gb/sec between any two hosts, even though the same source host could push 1Gb to one host and another 1Gb to a different host at the same time. This isn't something you'd be able to determine until AFTER everything is purchased and installed - it's just the nature of link aggregation and how it handles traffic flows.
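The reason a single host-to-host flow tends to be pinned to one member link is that the aggregation group hashes each conversation onto exactly one port. A toy illustration of a source/destination hash policy - real switches and NIC teams each offer their own hash options, so this only shows the general idea:

```python
# Toy illustration of why link aggregation caps a single host-to-host flow
# at one member link's speed: a hash of the (src, dst) conversation picks
# exactly one link in the group, and that pair always hashes the same way.
import zlib

LINKS = 4  # e.g. four 1 GbE ports in one 802.3ad group

def pick_link(src_ip: str, dst_ip: str) -> int:
    return zlib.crc32(f"{src_ip}->{dst_ip}".encode()) % LINKS

print(pick_link("10.10.10.1", "10.10.10.2"))  # always lands on the same link
print(pick_link("10.10.10.1", "10.10.10.3"))  # a different pair may use another link
```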

Also, just out of curiosity, have you benchmarked your actual disk subsystem I/O at its current configuration? What is it capable of reading and writing at? This might help you from overspending on network throughput if your budget is limited.
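If a proper benchmark tool isn't handy, even a crude sequential-read timing gives a ballpark MB/s figure to weigh against a 10Gb pipe. A rough Python sketch - the file path is a placeholder, and the OS page cache can inflate the result if the file was read recently:

```python
# Crude sequential-read timing of an existing large file to get a rough
# MB/s figure. TEST_FILE is a placeholder path - point it at a file of a
# few GB on the volume you care about.
import time

TEST_FILE = r"D:\test\bigfile.bin"   # placeholder path
CHUNK = 8 * 1024 * 1024              # 8 MiB reads

total_bytes = 0
start = time.perf_counter()
with open(TEST_FILE, "rb") as f:
    while True:
        chunk = f.read(CHUNK)
        if not chunk:
            break
        total_bytes += len(chunk)
elapsed = time.perf_counter() - start

print(f"read {total_bytes / 1024**2:.0f} MiB in {elapsed:.1f} s "
      f"-> ~{total_bytes / 1024**2 / elapsed:.0f} MiB/s")
```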
 

I'm capable of spending $2,000, but I'd really like to keep it around $1,500 - $1,000 if I'm lucky.

My last resort is to buy four quad-port gigabit NICs and team them up, but that's not the same as one single fast link; teaming doesn't have the same simplicity or power.

The nodes have 96GB of RAM each, so it takes quite a while to drain that memory between nodes, plus the software-based SAN I have running in the background (StarWind) will be using the link as well.

I'm quite a novice when it comes to all of this 10GbE, but I've been running Hyper-V for a while on powerful hardware - PCIe SSDs, plenty of RAM and CPU - and to take Hyper-V to the next level by clustering, I believe a much stronger interconnect is necessary to keep things flowing smoothly.
 
4x 10GbE SR multimode fiber cards - $400 http://www.ebay.com/itm/INTEL-EXPX9501AFXSR-10-GbE-Gigabit-XF-SR-Server-Adapter-/301123126997

1x switch with 4x SFP+ slots - $800-1200 depending on model; check Newegg for more suggestions http://www.ebay.com/itm/Netgear-Pro...54?pt=US_Network_Switches&hash=item23367839da

4x compatible 10GbE SR MM fiber SFP+ modules - $160 - http://www.ebay.com/itm/For-Netgear...ck-/200965085732?pt=US_Network_Switch_Modules

4x LC-LC cables, length of choice

~$1500?
Love this submission - it's by far the most helpful post I've gotten. I honestly expected more examples earlier, but for some reason it has been incredibly difficult to get a good set of examples...
 

Where did you get the quote for that InfiniBand tech??? What brand were you modeling it after?

I'm about to sell off my 3x 2-port (DDR) InfiniBand cards (CX4 connectors), an (SDR) unmanaged InfiniBand switch, and 7x (DDR) CX4 cables for under $600.
 