High bandwidth links between switches

CyberPunk_1000

Just a bit of theory for this one.

I was thinking that if I had, say, 10x 24-port 10/100 switches with two 1 Gb uplinks on each, and were then to connect these to a 10-port 1 Gbps switch, then if all 24 users were loading a switch at 100% it would become bottlenecked at the uplink to the 10-port switch. I was just wondering what's the usual approach to creating links greater than 1 Gbps. I don't think uplinking the switch twice would work without it being routable etc., would it? Any ideas?
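To put numbers on it, here's a quick back-of-the-envelope in Python (nothing vendor-specific, just the arithmetic for the scenario above):

```python
# Back-of-the-envelope oversubscription check:
# 24 access ports at 100 Mb/s feeding uplinks toward a 1 Gb/s aggregation switch.

access_ports = 24
port_speed_mbps = 100          # 10/100 ports running flat out
uplinks = 2
uplink_speed_mbps = 1000       # two 1 Gb/s uplinks per access switch

worst_case_demand = access_ports * port_speed_mbps   # 2400 Mb/s
uplink_capacity = uplinks * uplink_speed_mbps        # 2000 Mb/s, if both are usable

print(f"worst-case demand : {worst_case_demand} Mb/s")
print(f"uplink capacity   : {uplink_capacity} Mb/s")
print(f"shortfall         : {worst_case_demand - uplink_capacity} Mb/s")
print(f"oversubscription  : {worst_case_demand / uplink_capacity:.2f}:1")
```

Even with both uplinks usable you're 400 Mb/s short on paper, though in practice all 24 users rarely saturate their ports at once.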
 
Originally posted by CyberPunk_1000
...I was just wondering what's the usual approach to creating links greater than 1 Gbps. ... Any ideas?

Some switches allow you to "bond" two uplinks, so using that with two 1 Gb/s links would give you 2 Gb/s (theoretically speaking)... so theoretically you would still be short, but only by 400 Mb/s. They have 10 Gb/s switches available now and have had them for a short while, but I think you'll see those in bigger enterprise/ISP-type environments since it's still very costly per port...
 
You would need all your switches to be managed if you want to "bond" the uplink ports... and more than likely you'll need the same manufacturer for all the switches.
 
So I could basically bond lots of channels between critical switches. Also, what layer is that, 2 or 3?
 
Usually layer 2, unless you want to do layer 3 switching...

There are different names for it: Nortel calls it Multi-Link Trunking, Cisco calls it EtherChannel, or something like that...

Like I said, you have to make sure that the switches support it, and more than likely they are going to have to be from the same manufacturer...
 
So in the Cisco world, like flecom said, it's called EtherChannel or port channeling.

A port-channel can have up to 8 similar interfaces tied together, so in that scenario you could have up to 8 Gb/s of throughput between your IDF and MDF.
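One caveat: that 8 Gb/s is aggregate, not per-flow. The switch picks one member link per conversation by hashing header fields (which fields, and which hash, are vendor- and config-dependent), so a single flow never goes faster than one member link. A minimal sketch of the idea in Python, with a simplified stand-in hash rather than any real switch's algorithm:

```python
import hashlib

MEMBERS = 8  # up to 8 interfaces in one channel, per the above

def pick_member(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Hash a flow's header fields down to a member-link index.

    Real switches hash some mix of src/dst MAC, IP, and L4 ports;
    this is only a sketch of the principle, not a vendor algorithm.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return hashlib.md5(key).digest()[0] % MEMBERS

# Every packet of a given flow hashes to the same member, which keeps packets
# in order -- but also means one flow tops out at one member link's speed.
print(pick_member("10.0.0.5", "10.0.1.9", 49152, 80))  # always the same link
print(pick_member("10.0.0.6", "10.0.1.9", 49153, 80))  # may land elsewhere
```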

You'd be hard-pressed to find end devices that could push your trunk links that hard (for an average enterprise, at least...).

An extreme setup comes to mind from when I was talking with Foundry about their setup for ILM (Industry, light and magic). I think they said each closet had multiple 10 gig links to the core... wowza!!

BTW... the way you get around needing more than 8 Gb/s of throughput is to go 10 gig. The prices are coming down, but you will still pay at least $10k per port.

Also, most enterprise switches (Cisco, Foundry, Extreme, Nortel... uhm... Dell) are compatible with the 802.3ad standard (the IEEE standard for aggregating ports), also known as LACP.
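With LACP, each end of the link runs in either active mode (sends LACPDUs unprompted) or passive mode (only answers if spoken to), and the channel only comes up if at least one side is active. A tiny sketch of that rule, assuming both ends are otherwise compatible (same speed, duplex, etc.) and ignoring the real protocol's state machinery:

```python
# LACP negotiation boiled down to one rule: a channel forms only if at
# least one end actively sends LACPDUs. Two passive ends sit silent forever.

def lacp_channel_forms(end_a: str, end_b: str) -> bool:
    assert end_a in ("active", "passive") and end_b in ("active", "passive")
    return "active" in (end_a, end_b)

print(lacp_channel_forms("active", "passive"))   # True
print(lacp_channel_forms("active", "active"))    # True
print(lacp_channel_forms("passive", "passive"))  # False -- nobody speaks first
```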

Also, like flecom stated, it's highly recommended to stick to the same switch manufacturer throughout the enterprise.
 
Originally posted by Darthkim
ILM (Industry, light and magic).

ILM = Industrial Light & Magic

;)

Ya, and 1 gigabit is a lot, 8 gigabits is a hell of a lot... multiple 10 Gbit links is like... :eek:
 
Thanks, flecom.

Here is the link to the article regarding Foundry and ILM: "200 10G interconnects"

I want this network...

Also, one correction: apparently Cisco is now shipping their 4-port 10 gig card for around 20 grand... that's 5 grand per port... hm... :D :eek:
 
....

terabit backbone... :drool:

ughr... brb, need to change underwear...
 