10 Gigabit Switch Fibre Question

board2death986

I help run IT at a small refurbishing company and we use a PXE server to image most of our systems. We currently run it over a gigabit copper network but we are running dry on bandwidth. The guy who supplied our server recommended an upgrade to switches with 10 Gigabit fiber optic ports and I've found a good switch, I'm just lost on what kind of fiber cable to get.

I'm seeing 2, 4, 6, 12, and 24 strand cable and connectors for SC or LC and I'm not sure what I would need. I'll be using two of these switches http://www.newegg.com/Product/Product.aspx?Item=N82E16833122436 along with this adapter in our server http://www.newegg.com/Product/Product.aspx?Item=N82E16833106044. As far as bandwidth goes, we may be imaging as many as 70 systems at once, pushing images anywhere between 4 and 25GB.

Any help is appreciated.
 
How far apart will the server and the switches be? If they are close enough I'd suggest you don't use fiber at all - use SFP+ copper DAC cables. These are copper twinax cables with SFP+ connectors on each end that fit into both your switch and your adapter card. You can run up to 15m with these per spec, but you generally can't find cables longer than 12m.

Like these.
 
Well this may only be a temporary setup if we get the bigger warehouse we've been eyeballing. But the two imaging stations are about 100ft - 150ft apart right now.
 
Yea I found a vendor. A 150ft run of single mode 2 strand Fiber with LC connectors will run me about $160. Not too bad.
 
Yea I found a vendor. A 150ft run of single mode 2 strand Fiber with LC connectors will run me about $160. Not too bad.

You should be able to do much better than that...over $1/ft for fiber cable is pretty high. Before you buy, check cablesandkits.com. I've found in the past that they offer good deals on onesy/twosy purchases of fiber cable.

Don't forget that you still need SFP+ modules for the NIC and switch. You can't just plug the fiber cables in - you have to plug in the SFP+ fiber module first. I don't know about that Netgear switch, but the Intel NIC is picky. It will not recognize anything but a "real" Intel SFP+ module (or its Chinese knockoff, but you probably want to avoid those). For your application you want to get "SR" (Short Range) modules for both ends.

For the Intel NIC you need one of these: Intel E10GSFPSR Ethernet SFP+ SR Optics.

For the Netgear switch you might get away with any SFP+ module, but if it enforces vendor checks you probably want one of these: PROSAFE 10GBASE-SR SFP+ LC GBIC AXM761

In both cases - shop around. You can find much better prices than these from Newegg.
 
Before you make a final commitment to that Netgear switch, you might want to read through this study document by Miercom. This specific Netgear switch is not mentioned, but the report discusses a number of issues you might want to learn about - and the three "24x GbE + 4x 10GbE" switches it does compare are all available in the same price range you are shopping in.

Personally, I think the Cisco SG500X-24G would be a much better choice for you.
 
Make sure you get multimode fiber and multimode SFP+ modules. Singlemode fiber is for long distance runs.
Also, before you buy anything, do some network bandwidth and disk usage checks while you are imaging. If your server can't flood a 1Gb link as it is now, upgrading to 10Gb won't make a difference.
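If you want a quick way to check that on the server, here's a rough sketch of the kind of monitoring loop meant above (my own example, not anything from this thread - it assumes a Python environment with the psutil package installed, and the interface name is just a placeholder):

```python
# Rough sketch: sample NIC throughput and disk reads once per second
# so you can see whether the imaging server is actually saturating
# the existing gigabit link. Assumes the 'psutil' package is installed;
# the interface name "eth0" is a placeholder for your imaging NIC.
import time
import psutil

IFACE = "eth0"           # replace with the server's imaging NIC
GIGABIT = 1_000_000_000  # link speed in bits/sec

prev_net = psutil.net_io_counters(pernic=True)[IFACE]
prev_disk = psutil.disk_io_counters()

while True:
    time.sleep(1)
    net = psutil.net_io_counters(pernic=True)[IFACE]
    disk = psutil.disk_io_counters()

    tx_bps = (net.bytes_sent - prev_net.bytes_sent) * 8
    read_mbs = (disk.read_bytes - prev_disk.read_bytes) / 1e6

    print(f"NIC tx: {tx_bps / 1e6:7.1f} Mbit/s "
          f"({100 * tx_bps / GIGABIT:5.1f}% of 1GbE)  "
          f"disk reads: {read_mbs:6.1f} MB/s")

    prev_net, prev_disk = net, disk
```

If the NIC sits near 100% while the disk still has headroom, the link is the bottleneck and 10Gb will help; if the disk is pegged, it won't.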
 
10GBase-T would be a bad idea?
//Danne

10Gbase-T would be wonderful. Unfortunately, there are no reasonably priced switches out for it yet. I understand there are several in the pipeline from Cisco, HP, Netgear, D-Link, etc., but nothing has been released yet.
 
Make sure you get multimode fiber and multimode SFP+ modules. Singlemode fiber is for long distance runs.
Also, before you buy anything, do some network bandwidth and disk usage checks while you are imaging. If your server can't flood a 1Gb link as it is now, upgrading to 10Gb won't make a difference.

Hmm well I have about 28 machines deploying a 3GB image and the gigabit connection is completely saturated. Showing 96-98% utilization.
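Just as a quick back-of-envelope on those numbers (mine, using the round figures in this thread and assuming the uplink is the only bottleneck):

```python
# Back-of-envelope: batch time if the switch uplink is the bottleneck.
# Round numbers from this thread; ignores protocol overhead and assumes
# the disks and the client NICs can keep up.
machines = 28
image_gb = 3.0
total_bits = machines * image_gb * 8e9   # total data to push, in bits

for label, link_bps in [("1 GbE uplink", 1e9), ("10 GbE uplink", 1e10)]:
    minutes = total_bits / link_bps / 60
    print(f"{label}: ~{minutes:.0f} minutes for {machines} x {image_gb:.0f} GB")
# -> roughly 11 minutes on 1GbE vs about 1 minute on 10GbE
```

So if the uplink really is the choke point, a 10Gb link between the switches buys roughly a 10x reduction in batch time, provided the server's storage can feed it.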
 
Here's what I got so far:
Switch: http://www.newegg.com/Product/Product.aspx?Item=N82E16833122436
Switch module: http://www.supplysale.com/Item.aspx...p=01493132OP&gclid=CPWKku288rQCFaN_QgodalYAXQ
Multimode Fiber 40M: http://www.cablesandkits.com/lclc-gigabit-multimode-duplex-50125-fiber-patch-cable-45m-p-3829.html

Planning on running it from a Dell Poweredge T620 with a PCI-E SSD drive for distribution. Can you say, Holy Read Speeds Batman? http://www.newegg.com/Product/Product.aspx?Item=N82E16820227741
Would get the Intel X520 10Gb SFP+ adapter to connect to the switch.
 
I currently have about 30 images on the server, but once we get our 64-bit versions finalized that will nearly double. On top of that I'm adding customer images as we get them instead of relying on hard copies. We deploy anywhere from 100-300 images a day between our PXE server and hard drive duplicators, but I'm trying to rely less on the duplicators going forward.
 
You need 50/125 OM3 or OM4 aqua-colored fiber for multimode gear - typically 850nm wavelength. You can also use OM1/OM2 orange-colored cabling; it will support 10Gb just fine, but only over very limited ranges, usually under 80 meters.

You need 9/125 singlemode fiber, usually colored yellow, for singlemode gear - typically 1310nm wavelength.

Range depends on the type of fiber and the type of transceiver...
Singlemode can stretch 10+ km, up to many hundreds.
Multimode (common for LANs) has a max of about 2km, and that is pushing it. Typical range is 300 meters.

SC, LC, and all those connector types are based on the type of plug your gear uses and are absolutely irrelevant to data throughput. Make sure you pick end connectors that match your gear. They make LC to LC, SC to SC, LC to SC, ST to SC, LC to ST, etc., and converters too...

Your less expensive 10Gb gear is going to be multimode 850nm short-reach type.

I use fiber in my home - 10Gb 850nm Cisco laser-based short-reach transceivers - and let me tell you, it is BLAZING fast.

Long-reach transceivers can go many, many miles/km; you probably do not need those.

I will not get into the types of signaling right now past what I have explained - there are quite a few, i.e.

SR, LR, LRM, LX4, ER, ZR... etc. all have different ranges and signaling standards. You will never need ZR, and your boss will fire you if you spend money on ZR equipment LOL - it has a range of up to 80km.
 
Wow, that was some detailed info. . .Well for now the distance between my two main switches is about 100ft, so the lowest end sounds like it can handle it fine. If we get the bigger warehouse space I've had my eye on I'll be able to consolidate my department and the switches won't be more than 30ft away from each other.
 
Just curious, have you considered looking into doing a LACP trunk line using several 1Gb lines? Most decent 1Gb switches will let you bind up to 8 1Gb lines together. You'll only get 8Gb instead of 10Gb but at very little or no cost depending on what hardware you already have.
 
Just curious, have you considered looking into doing a LACP trunk line using several 1Gb lines? Most decent 1Gb switches will let you bind up to 8 1Gb lines together. You'll only get 8Gb instead of 10Gb but at very little or no cost depending on what hardware you already have.

Great idea, however OP, 802.3ad does not perform this way.

It allows 2 things - well, many more, but for the sake of this conversation, 2 things.

1. Link redundancy - if a switch port fails, a cable breaks, or a NIC dies, it will fail over.
2. It will not allow 8Gbps (in this example) of bi-directional bandwidth to any one host. You will still be bandwidth limited to 1Gbps per host, PERIOD. Where it does allow 8Gbps is when 8 or more hosts are simultaneously pulling and pushing data to the server on the other end. Even then you'd better have one badass switch or a seriously badass set of NICs, because the overhead for TCP and LACP control frames is very high.

It is a great way to bind switches together via trunks, and it is also a great way for servers to handle many hundreds of 100Mbps or 1Gbps PCs and other hosts much more smoothly.

The only way to truly get the bandwidth host to host or switch to switch is to step up to 10Gbps Ethernet, or 10/20/40 Gbps InfiniBand.
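Here's a toy illustration of why a single host never sees more than one member link (my own sketch - the hash below is a stand-in, real switches use vendor-specific hashes over MAC/IP/port fields, often configurable):

```python
# Toy illustration of LACP / port-channel load sharing: each flow is
# hashed to ONE member link, so a single server<->client conversation
# never uses more than one 1Gb link. The hash here is a simplification;
# real switches hash vendor-specific combinations of MAC/IP/port fields.
from collections import Counter

MEMBER_LINKS = 8

def pick_link(src_ip: str, dst_ip: str) -> int:
    """Pick the member link this flow will ride on."""
    return hash((src_ip, dst_ip)) % MEMBER_LINKS

server = "10.0.0.10"
clients = [f"10.0.0.{i}" for i in range(100, 170)]   # 70 imaging clients

usage = Counter(pick_link(server, c) for c in clients)
print("flows per member link:", dict(sorted(usage.items())))
# Many clients spread across the 8 links, so the AGGREGATE can approach
# 8Gbps - but the single flow to any one client is still capped at 1Gbps.
```

That's the point made above: the aggregate scales with the number of clients, the per-host ceiling doesn't.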
 
I use fiber in my home - 10Gb 850nm Cisco laser-based short-reach transceivers - and let me tell you, it is BLAZING fast.

I'm new to the server world, but very educated in computers. Would you mind sharing your setup for your 10Gb network? I desperately want to build one but don't know where to start. Thanks!
 
Pretty sure I'm going with fiber. I'm only connecting the two 48-port switches and just trying to get max bandwidth, since we are looking to have a crazy summer this year. We project imaging around 1200 systems a week, and imaging is the bulk of the time we spend handling the systems, so any time savings are huge when multiplied by the numbers we produce over our summer months.
 
Which switches are going to be trunked or uplinked? You need to make sure those switches have SFP+ or X2 ports that support 10Gb first.
 
How are you imaging the machines?

With Ghost we used multicast, so we would get gigabit to each machine instead of each machine using its own bandwidth through the network.
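For anyone unfamiliar with what multicast buys you here: the server sends each block of the image once to a group address and every client that joined the group receives it, instead of N separate unicast streams. Here's a minimal sketch of the receive side using the generic UDP multicast socket mechanism (this is not Ghost's or WDS's actual protocol - they add sequencing and retransmission on top - and the group/port are arbitrary placeholders):

```python
# Minimal UDP multicast receiver: every imaging client joins the same
# group, so the server transmits each block ONCE and all clients get it.
# Generic socket mechanism only; group address and port are arbitrary.
import socket
import struct

GROUP, PORT = "239.1.2.3", 5007

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Tell the kernel to join the multicast group on all interfaces.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    block, sender = sock.recvfrom(65535)
    print(f"got {len(block)} bytes from {sender}")
```

The sending side is just an ordinary UDP socket doing sendto(data, (GROUP, PORT)); the network replicates the frames to every port that joined the group.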
 
I'm new to the server world, but very educated in computers. Would you mind sharing your setup for your 10Gb network? I desperately want to build one but don't know where to start. Thanks!

Well, 10GbE is still expensive tech, but you can get good deals on eBay. The most expensive component is going to be the switch; however, you can circumvent the need for a switch by direct-connecting your 10GbE NICs together if you only intend to have two 10Gb connections in your home or office.

I am running the following gear:

Cisco 3750E 24-port gigabit Layer 3 router w/ 2 X2 ports.
Cisco X2 form factor 10Gb short-reach 850nm laser modules x2, installed in the X2 ports (SC connector type)
Intel 10Gb XF SR single-port server cards, LC connector type, PCI-E x8 2.0 interface (these are nuclear blazing fast cards)
Monoprice 50/125 OM3 aqua-colored fiber. I have 2 fibers running and fast they are. (SC to LC connector type - SC on one end and LC on the other)

I am running a 10Gb NIC in my sig rig below and one in my ZFS Xeon E5-1620 3.6GHz 32GB ECC NAS. I am maxing my sig rig's I/O at around 550-600MB/s sustained transfer rates, and there is a lot more room left in my ZFS box and definitely a little more in this 10Gb fat pipe.

I think all in all I paid about $2000 for my L3 3750E router (switch)
$300-350 for the two NICs
$80 for one X2 10g module and $120 for the second module
Brand new fiber from monoprice was like $14.00 each and I got about 5 fibers, good to have surplus.

About $3000 for everything. But I had the router for a while now so I said what the hell and went all 10gigitty haha
 
How are you imaging the machines?

With Ghost we used multicast, so we would get gigabit to each machine instead of each machine using its own bandwidth through the network.

We are using Microsoft's RPK tool. Because we deal with so many configurations and types of systems, this tool allows us more flexibility with our images.
Instead of having to run sessions for each image, RPK adds deployments to WDS, which are selectable by each individual system after booting to the PXE server. I can take a couple of shots of this tomorrow at work. Since the sessions are individual, if units fail it doesn't affect the "batch". We use this alongside a couple of Omniclone hard drive duplicators, depending on our needs.
 