Alternatives to port trunking?

BrainEater

[H]ard|Gawd
Joined
Jul 21, 2004
Messages
1,244
My friends and I have been LANning for years.....slowly but surely, we've been upgrading our network.

At this time, we are just finishing our transition to a full gigabit network. We currently run an SMC 8524T 24-port gigabit switch. (24 port version of this one.)

Within a couple weeks, we'll also be buying an SMC 8612T Tigerswitch.

Here's the issue :

The network topology will be as follows ;
The 24-port switch will be clients only. It'll be connected to the tigerswitch, which will host the uplink/servers.

It's the connection between the two switches I'm concerned about. Now, the tigerswitch, being a fully managed L2 switch, supports port trunking/load balancing across up to 4 ports using the Cisco etherchannel spec....As far as I know, the 24-port does not (still trying to get through to SMC).

I'd really like to run a pair of Cat6 cables between the two switches, as the traffic could get pretty heavy, but I'm unsure if I'll gain any benefit without port trunking enabled.

Is it possible to achieve this kind of load balancing with only 1 switch being managed / trunking capable ?

Any suggestions ?
 
Not really. BTW, generally when people say trunking, they mean ISL or 802.1Q vlan trunking. People usually refer to bundled links as etherchannels or port channels.
If one side supports it, and the other side doesn't, it will not function properly.
Best case scenario, the lower end switch will support STP, and it will block one of the redundant links.
Worst case scenario, you get a bridging loop and your network goes down.
 
Ah ok....I've seen the terms mixed a bit.....Thank you for the correction.

Etherchannel connection seems like what I mean. :D

hmm.

It's been suggested to me that we might achieve a 2-cable, load-balanced connection between the two switches via a computer with 4 NICs and appropriate software.....
 
Right, that will work. However, you (last I checked) either need high-end NICs that support teaming, or need special software with NICs that will work with it (I know a lot of Intel NICs work). Now this I am not totally sure on, as I don't work on servers.
However, how much does that new switch you are getting cost? Some of the teamed NICs are awfully expensive.
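For what it's worth, on a Linux box you don't necessarily need special teaming NICs: the kernel's bonding driver can aggregate ordinary interfaces in software. A minimal sketch using iproute2, assuming two interfaces named eth0/eth1 and an example address (all names here are placeholders, not from the thread):

```shell
# Sketch: bond two NICs with the Linux kernel bonding driver (iproute2 syntax).
# Interface names and the IP address are placeholders - adjust for your hardware.
# mode 802.3ad (LACP) requires the switch side to speak LACP too;
# mode balance-rr needs no switch support, but mainly helps transmit traffic.
ip link add bond0 type bond mode 802.3ad   # or: mode balance-rr
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0
```

Whether this beats a single gigabit link depends on how traffic hashes across the bundle; a single flow still rides one physical link.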
 
BrainEater said:
It's been suggested to me that we might achieve a 2-cable, load-balanced connection between the two switches via a computer with 4 NICs and appropriate software.....

The performance would be subpar with a PC, and both switches would have to support whatever link aggregation protocol you intend to use on either side.

If you really need more than a single gigabit uplink (did you test anything to see that you would pull over 1 gigabit? Most high-end PCs can't even push 500 Mbit/s), I'd suggest buying another 24-port switch that supports the technology that you need.
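On the "did you test anything" point: the usual way to measure real host-to-host throughput is an iperf run between two machines before the LAN. A sketch assuming iperf3 is installed on both ends (the hostname is a placeholder):

```shell
# Measure actual throughput between two hosts before assuming you need >1 Gbit/s.
# Requires iperf3 installed on both machines; the hostname is a placeholder.

# On the server machine:
iperf3 -s

# On a client (parallel streams approximate LAN-party load better than one flow):
iperf3 -c fileserver.lan -P 4 -t 30   # 4 parallel TCP streams for 30 seconds
```

If several clients running this concurrently can't saturate one gigabit link, a second uplink won't buy much.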

Edit:
That 12 port SMC supports LACP, not etherchannel. They're 2 different technologies, with LACP being the newer/better one.
 
Well technically it's LACP vs PAGP. They're both etherchanneling technologies. But they both do pretty much the same thing... most people don't use the features LACP has beyond PAGP.
 
ok...

I thought the tigerswitch supported both LACP and etherchannel....

Yea, I'm pretty sure we'll exceed 1 Gbit/sec easy........we're not talkin 2 clients here.....we need to run 40 clients.....22 clients from switch 1 to switch 2, and the servers.....

Obviously, this isn't a continuous thing, but at times the traffic could be insane......think 16 people transferring large files + 10 peeps gaming.
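A quick back-of-envelope on that worst case helps frame whether a second uplink matters. Assuming (idealized) that all 16 transfers cross the inter-switch link and share it evenly:

```shell
# Back-of-envelope: per-transfer share of the inter-switch uplink.
# Assumes all transfers cross the uplink and share it evenly (idealized).
uplink_mbit=1000      # one gigabit link
transfers=16          # simultaneous large file transfers

per_transfer=$((uplink_mbit / transfers))
echo "Each transfer gets ~${per_transfer} Mbit/s on a single link"

# With a 2-link bundle, and traffic hashing evenly (optimistic):
per_transfer2=$(( (2 * uplink_mbit) / transfers ))
echo "Each transfer gets ~${per_transfer2} Mbit/s on a 2-link bundle"
```

So even doubling the uplink only moves each transfer from roughly 62 to 125 Mbit/s; keeping the file server and the heavy transferrers on the same switch avoids the uplink entirely.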

I'm basically just searching for the best topology I can get. I can't afford another switch at this time.

Thx for the help guys !

:D
 
Depending on the topology of your clients' requirements and the capabilities of their individual NICs, would a distributed model not work better than a top-heavy tier? I.e., those predominantly doing file transfers share the GBit switch with the file server, and those gaming share the 100Mbit switch with the gaming server. Depending on the backplane capabilities of your switches, you may find this a better solution (since file transfers are only going to hinder the ping times on your gaming rigs if they are all using the same uplink to the 'server' switch).
 
Darkstar850 said:
Well technically it's LACP vs PAGP. They're both etherchanneling technologies. But they both do pretty much the same thing... most people don't use the features LACP has beyond PAGP.

PAGP is also known as Etherchannel, and is a Cisco-only proprietary protocol. Etherchannel is not a generic term for link aggregation.

I thought the tigerswitch supported both LACP and etherchannel....

According to the datasheet on the 12 port SMC switch it only supports LACP.

Even if you guys don't want to spend any more money to get 2 switches that both support LACP, I think you'd do fine with only 1 gigabit link between the 2.
 
alrox said:
PAGP is also known as Etherchannel, and is a cisco only proprietary protocol. Etherchannel is not a generic term for link aggregation.

I disagree, and so does Cisco. I read the article that you linked, and here are some things that support my position:

"Catalyst 6500/6000: Understanding the Link Aggregation Control Protocol section of the document Configuring EtherChannel"

"Catalyst 6500/6000: Understanding IEEE 802.3ad LACP EtherChannel Configuration section of the document Configuring EtherChannels"

"show etherchannel port-channel—Displays LACP port channel information, similar to the information provided by the show port lacp-channel "

There is no specific mention of Cisco's PAGP only meaning etherchannel.

Also, in my CCNP BCMSN text, both LACP and PAGP are discussed under the heading "Etherchannel Negotiation Protocols." There is another section a page later entitled "Configuring a LACP etherchannel."

Clearly, Cisco at the very least considers LACP and PAGP to both be etherchannel protocols, as I stated. Whether the IEEE uses this term or not, I am not certain.

Note: The Cisco article also refers to LACP port bundling as a form of trunking, so it looks like BrainEater was technically correct earlier. However, I stand by my statement that in general discussion, trunking is used to mean vlan trunking.
 
"Etherchannel" is a registered trademark of Cisco. A quick google search can prove that. It came out many years before LACP and for a time it was the only way to bond L2 links.

802.3ad(LACP) is an IEEE standard. Cisco supports both in newer switches. While they do the same thing, they are NOT the same.

It seems that the reason the 'show etherchannel' command can also display LACP information is that when LACP was integrated into IOS/CatOS, it was just lumped into the same category as etherchannel, because it was a true standard and did the same thing. It was easier to keep 1 command than replace it with 2 that do basically the same thing.
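For anyone configuring this on Cisco gear, the distinction shows up in the channel-group mode keyword: desirable/auto negotiate PAgP, while active/passive negotiate LACP. A hedged IOS sketch (interface numbers are placeholders, not from this thread):

```
! Cisco IOS syntax, for illustration only - interface numbers are placeholders.
! PAgP (Cisco-proprietary) negotiation:
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode desirable
! LACP (IEEE 802.3ad) negotiation on a separate bundle:
interface range GigabitEthernet0/3 - 4
 channel-group 2 mode active
```

Both forms produce a port-channel interface; the mode keyword is what picks the protocol.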
 
alrox said:
"Etherchannel" is a registered trademark of Cisco. A quick google search can prove that. It came out many years before LACP and for a time it was the only way to bond L2 links.

802.3ad(LACP) is an IEEE standard. Cisco supports both in newer switches. While they do the same thing, they are NOT the same.

I am aware of the origins of both. However, that does nothing to deny the fact that Cisco refers to both of them as etherchannel negotiation protocols. They both form etherchannels; they are simply different protocols for doing so. PAGP was the only way to etherchannel at first, because there was no other way to do so. Now there is a standards-based alternative.
This is like saying that 802.1Q is not vlan trunking because Cisco did it first with ISL.

Anyway, we are getting far off topic with this thread. I am not going to continue down this path, so as not to further hijack the discussion. If you want to continue to argue this point, you'll have to do it by yourself; I will simply agree to disagree at this point.
 
cyberjt said:
Depending on the topology of your clients' requirements and the capabilities of their individual NICs, would a distributed model not work better than a top-heavy tier? I.e., those predominantly doing file transfers share the GBit switch with the file server, and those gaming share the 100Mbit switch with the gaming server. Depending on the backplane capabilities of your switches, you may find this a better solution (since file transfers are only going to hinder the ping times on your gaming rigs if they are all using the same uplink to the 'server' switch).

Actually, both switches are gigabit. And I am starting to lean towards a more distributed topology....seems like a better way to go.

--------------------------------------------------------------------

The tigerswitch runs about $1100....

You can bet our next switch will be both 'stackable' and LACP compatible......but in the meantime, I've got a LAN to set up in a couple weeks.....
 
BrainEater said:
Actually, both switches are gigabit. And I am starting to lean towards a more distributed topology....seems like a better way to go.

--------------------------------------------------------------------

The tigerswitch runs about $1100....

You can bet our next switch will be both 'stackable' and LACP compatible......but in the meantime, I've got a LAN to set up in a couple weeks.....

At that price, you may seriously want to look at the Dell PowerConnect 2716 or 2724. They do VLANs, link aggregation (802.3ad only) and jumbo frames for 25% of the price of that tigerswitch.
 
holy....those ARE a lot cheaper.....

Thanks for the link !

I will definitely be looking into that! They seem to have very similar specs.

:D
 
Holy crap those are really nice switches !

They do all the management I need, will port aggregate, and are cheap enough I can buy 2 and still save $500.

Thank you again for the link !
You sir, are welcome at any of our LANs, for free, until the end of time. :D
 
Ya, whoa is that tigerswitch expensive. No dilemma now, since you can get 2 switches with all the technology you need :D
 