Cisco Porn - 4507

iaintmost

After working with junk Dell PC3448POEs as my switch core (two stacks of three) for the last two years, I finally have them coming out and my 4507 dual-PS/dual-Sup chassis going in.

Unfortunately the Sups are on back order until 12/1, but I got the chassis and power supplies today and should be getting the PoE line cards shortly as well.

I'll update as I receive the parts. Now I have to decide whether to face the card/cable side out or the pretty Cisco side out.

http://gallery.me.com/iaintmost#100043
 
Nice. I'm a big fan of the 4500s.

Pun? :D


Yah, me too. For the price I like them better than our 6500s. Damn expensive things.

My biggest complaint about the 4500 series is the oversubscription on the original gigabit 48-port modules (WS-X4548-GB-RJ45). Those are 8:1. Make sure you know that if you plan on doing server aggregation.
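For anyone curious where that 8:1 figure comes from, here's a rough back-of-the-envelope sketch. The per-group layout (eight ports sharing 1 Gbps of fabric bandwidth, for 6 Gbps per slot) is my assumption based on the commonly quoted specs for the classic line cards, so double-check it against the data sheet:

```python
# Back-of-the-envelope oversubscription math for a 48-port gigabit module.
# ASSUMED layout (verify against Cisco's data sheet): the WS-X4548-GB-RJ45
# splits its 48 ports into groups of 8, each group sharing 1 Gbps to the fabric.

PORT_SPEED_GBPS = 1        # worst case: every 10/100/1000 port running at 1 Gbps
PORTS_PER_GROUP = 8
GROUP_UPLINK_GBPS = 1      # shared fabric bandwidth per port group
TOTAL_PORTS = 48

groups = TOTAL_PORTS // PORTS_PER_GROUP                  # 6 groups
demand_per_group = PORTS_PER_GROUP * PORT_SPEED_GBPS     # 8 Gbps offered per group
ratio = demand_per_group / GROUP_UPLINK_GBPS             # 8.0 -> "8:1"
slot_bandwidth = groups * GROUP_UPLINK_GBPS              # 6 Gbps per slot

print(f"{groups} groups, {slot_bandwidth} Gbps per slot, {ratio:.0f}:1 per group")
```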
 

I got the WS-X4648-RJ45V+E. :D

From Cisco:

WS-X4648-RJ45V+E:

• 24 gigabits per slot capacity

• 48 ports

• 10/100/1000 module (RJ-45)

• Cisco IOS Software Release 12.2(40)SG or later

• IEEE 802.3af and Cisco prestandard PoE, IEEE 802.3x flow control

• Bandwidth is allocated across eight 6-port groups, providing 3 Gbps per port group (2:1)

• L2-4 Jumbo Frame support (up to 9216 bytes)

• Capable of up to 30W of inline power per port

• Enterprise and commercial: designed to power next generation IP phones, wireless base stations, video cameras, and other PoE devices

• Campus and Branch applications requiring enhanced performance for large file transfers and network backups

I don't believe we do data/server aggregation though.
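To put that bullet about the eight 6-port groups into concrete numbers anyway, here's a small sketch. The consecutive port-to-group mapping is only an illustrative assumption, not something from the quoted data sheet, so check the module documentation before planning server placement around it:

```python
# Sketch of the WS-X4648-RJ45V+E bandwidth bullets quoted above:
# eight 6-port groups at 3 Gbps each = 24 Gbps per slot, 2:1 per group.
# ASSUMPTION: ports fall into groups consecutively (1-6, 7-12, ...).

TOTAL_PORTS = 48
GROUP_SIZE = 6
GROUP_BW_GBPS = 3
PORT_SPEED_GBPS = 1

groups = {g + 1: list(range(g * GROUP_SIZE + 1, (g + 1) * GROUP_SIZE + 1))
          for g in range(TOTAL_PORTS // GROUP_SIZE)}

slot_bw = len(groups) * GROUP_BW_GBPS                     # 8 * 3 = 24 Gbps per slot
ratio = (GROUP_SIZE * PORT_SPEED_GBPS) / GROUP_BW_GBPS    # 6 / 3 = 2.0 -> "2:1"

print(f"{len(groups)} groups, {slot_bw} Gbps per slot, {ratio:.0f}:1 per group")
for gid, ports in groups.items():
    print(f"  group {gid}: ports {ports[0]}-{ports[-1]}")
```

The practical takeaway for server aggregation is to spread heavy hosts across groups rather than filling one group, since each group only gets its 3 Gbps share.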
 
4500s are pretty excellent, and when I needed just a smallish one, the Supervisor II-Plus-TS for a 4503 is super useful!
 
I'll take that off your hands if you don't need it :D That thing looks so beastly. If only I could ever afford a NEW piece of Cisco equipment without having to go broke for a year.
 
Very nice, dude. Good call on speccing the RIGHT PSU as well. If only my 4006 had a nice inline power source; no PoE for me without the stupid PEM. What Sups are you going with? RPR should be fun, but there are some NASTY bugs along with it.
 
I've got a 4510R. Dislike: too oversubscribed, and no good way to tie two of them together; there wouldn't be enough bandwidth :/

Time for 6500s or something, maybe.
 
No way to tie them together? I think a few multi-10Gbps port channels sound pretty good to me: 2x10Gbps, with one 10Gbps port on each ASIC group, so you have no oversubscription at the card or the chassis fabric.

This switch isn't really meant to be a core switch, though many smaller shops (medium-sized offices) use it for just that. The 6500s that you want to "tie together" are going to cost you so much money you're going to have to cry yourself to sleep (VSL-capable cards), compared to the 4500; that is, if you were referring to the VSS capability.
 
Just went VSL/VSS on ours (and many more to come).

Bleh, with Nortel's SMLT I could have this capability even on a 1U edge switch.
 
No way to tie them together? I think a few multi-10Gbps port channels sound pretty good to me: 2x10Gbps, with one 10Gbps port on each ASIC group, so you have no oversubscription at the card or the chassis fabric.

This switch isn't really meant to be a core switch, though many smaller shops (medium-sized offices) use it for just that. The 6500s that you want to "tie together" are going to cost you so much money you're going to have to cry yourself to sleep (VSL-capable cards), compared to the 4500; that is, if you were referring to the VSS capability.

We're looking at going 10G for our production ESX cluster. We're not really a small shop, but not really large either; a few hundred servers if you count VMs. Now that we've moved to a gigabit WAN connection, I feel the horrible oversubscription of several of the blades may begin to hinder us soon. The cost really isn't that big a deal; it won't hurt any more than the new NetApp cluster, ouch.
 
Oversubscription isn't really a problem if you properly design the infrastructure; again, know which port groups there are per ASIC.

Also, is that gig connection copper (metro, policed at 1 Gbps?) or SONET? That's some pretty crazy bandwidth for a "not really large" shop :D
 

Heck yeah. My university has about 20k students plus staff, and we only have a gigabit connection.
 
But then you would have to use a Nortel switch! :p

That's what I'm saying: coming from a mostly Nortel shop on the LAN/data center side, I was doing all this stuff 6-8 years ago, and for cheap!

Now I'm at an all-Cisco shop, and we're just beginning to roll out VSS. The differences are somewhat laughable... as are the actual cost and components you need to convert your 6500s to VSS.
 
Oversubscription isn't really a problem if you properly design the infrastructure; again, know which port groups there are per ASIC.

Also, is that gig connection copper (metro, policed at 1 Gbps?) or SONET? That's some pretty crazy bandwidth for a "not really large" shop :D

Our old 100Mb WAN was copper; the new one is a direct fiber connection. We're literally 5 floors above our ISP.
 
That's what I'm saying: coming from a mostly Nortel shop on the LAN/data center side, I was doing all this stuff 6-8 years ago, and for cheap!

Now I'm at an all-Cisco shop, and we're just beginning to roll out VSS. The differences are somewhat laughable... as are the actual cost and components you need to convert your 6500s to VSS.
Actually, you could have done StackWise with multi-stack channels over 6 years ago with the 3750s for cheap too :D. VSS is considerably more expensive for very obvious reasons. Is VSS new? Sure... is the technology new? Absolutely not.

Bergo, are they really giving you line rate and not policing you?
 

Our bandwidth isn't throttled by our ISP. We're charged for our 95th percentile of usage, so it's advantageous for us to limit how much of the available bandwidth we use by doing some traffic shaping. Can we technically make use of the full 1000Mb connection? Yes. Do we want to pay to do so? Hell NO! haha.

The equipment we have on the edge doesn't even make use of the bandwidth that's technically available to us. We have 7204VXRs with NPE-G2s.
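For anyone who hasn't run into 95th-percentile billing, here's a minimal sketch of how the number is usually computed (5-minute samples are the typical interval; the sample values below are made up purely for illustration):

```python
# Minimal 95th-percentile billing sketch: the provider samples utilization
# (Mbps) every 5 minutes for the month, discards the top 5% of samples,
# and bills on the highest remaining one. The sample data is fabricated.

import math

samples_mbps = [120, 95, 400, 980, 150, 130, 110, 870, 640, 160,
                140, 105, 700, 90, 125, 135, 145, 155, 115, 100]

ranked = sorted(samples_mbps, reverse=True)
discard = math.floor(len(ranked) * 0.05)   # top 5% of samples are ignored
billable = ranked[discard]                 # highest sample that still counts

print(f"peak: {ranked[0]} Mbps, billed 95th percentile: {billable} Mbps")
```

That's why shaping sustained traffic pays off: a handful of short bursts land in the discarded 5%, but anything pushed for more than roughly 36 hours in a month sets the bill.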
 
Actually, you could have done StackWise with multi-stack channels over 6 years ago with the 3750s for cheap too :D. VSS is considerably more expensive for very obvious reasons. Is VSS new? Sure... is the technology new? Absolutely not.

Stacking switches != two physical chassis bundled as one logical chassis while maintaining separate data and control planes.

You're comparing stacking switches with proper L2 link aggregation across multiple switches?
 
Obviously you have no idea what StackWise technology is; I'm not talking about GigaStack here. Creating a stack with StackWise creates one "logical chassis" controlled by a stack master (same as VSS, in a sense). You can perform multichassis link aggregation with StackWise the same as you could with VSS. Perhaps you should try it; I've used it on multiple occasions to create multichassis port channels.
 
We're looking at going 10G for our production ESX cluster. We're not really a small shop, but not really large either; a few hundred servers if you count VMs. Now that we've moved to a gigabit WAN connection, I feel the horrible oversubscription of several of the blades may begin to hinder us soon. The cost really isn't that big a deal; it won't hurt any more than the new NetApp cluster, ouch.

A couple of Cisco Nexus 5020s for your VMware farm, with CNAs in each server. Two cables... 20Gb to each. If you're using Fibre Channel, do it with Fibre Channel over Ethernet and consolidate fabrics.

Have a few of those projects going right now.
 
Obviously you have no idea what StackWise technology is; I'm not talking about GigaStack here. Creating a stack with StackWise creates one "logical chassis" controlled by a stack master (same as VSS, in a sense). You can perform multichassis link aggregation with StackWise the same as you could with VSS. Perhaps you should try it; I've used it on multiple occasions to create multichassis port channels.

And are there separate control planes on each switch of the stack?

And with stacking via the backplane/cables in the back, how are you supposed to have switches in the same 'stack' in separate physical data centers???
 