Cisco's 3 layer network model - Do I need it?

Greetings,

I'm redesigning my network because it's currently a tangled pile of crap. "Best practice" says to use Cisco's three layer network model but I'm wondering if I need that.

My network is as follows:

8 x 48-port PoE user access switches in 3 different closets.
7 x 48-port access switches in my data center serving up about 170 servers.

The 3-layer model says I'd have a core switch going to 4 distribution switches (3 closets plus the data center), then to the respective access switches. While I understand the design, it seems a bit over-engineered. All of the traffic will be from the users to the data center, with 32 remote offices talking to the data center as well. The remote offices hang off an MPLS cloud and, combined, would only do about 30 Mbps total.

So, to save a hop on the network and a few $$$$$, couldn't I run trunks back from the access switches to a single, large core switch?
 
Just do a collapsed core. Get some switches with lots of fiber ports to act as the core/distro and run everything back to them.
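
Roughly, each closet/data-center switch just trunks back to the collapsed core and the core does the inter-VLAN routing. A minimal sketch of the idea (VLAN numbers, addresses and interface names are placeholders, not anything from your actual network):

! Access switch side - uplink trunk to the collapsed core
interface TenGigabitEthernet1/0/1
 description Uplink to core
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,100
!
! Collapsed core side - matching trunk plus SVIs routing between user and server VLANs
interface TenGigabitEthernet1/1
 description Downlink to Closet-A
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,100
!
ip routing
!
interface Vlan10
 description User VLAN
 ip address 10.0.10.1 255.255.255.0
!
interface Vlan100
 description Server VLAN
 ip address 10.0.100.1 255.255.255.0

Same idea whether the core is one box or a pair; with a pair you'd just dual-home each closet uplink.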

Single core switch? I'd advise against that part.
 
If you use a single switch for your core you will have a single point of failure.

The Cisco model is designed to give you redundant paths to allow for failure.

You need to base your logical design around your physical requirements, just account for some basic redundancy.
 
The 3-layer model says I'd have a core switch going to 4 distribution switches (3 closets plus the data center), then to the respective access switches. While I understand the design, it seems a bit over-engineered.


That's the point; it is engineered for maximum performance and scalability. If you don't need that, you can combine your core and distribution layers into a pair of switches (no single point of failure).
 
This is why a core/three-tier design is important:

[attached diagram]


If you don't have a lot of blocks/physical devices, it's not a necessity.
 
Thanks for the replies!

Budget is a (major) concern and I understand a single core switch is a single point of failure. The finance gods understand this as well and will allocate budget next year for another core switch so we can do dual path to the access switches.

Would a single core switch with enough mojo, such as a Cisco 6500 series, be up to the job?
 
Depends on what model of 6500. The chassis itself (outside of the later E revisions) is basically just a passive backplane that hasn't changed much in 10 years. The newer supervisor modules can move a lot of bits, but given your requirements you could probably get away with using a Sup32.

Something you might consider is Nexus 7000. Cisco is practically giving them away right now in the bundles, and you can use fabric extenders (FEX) as opposed to edge switches which brings the price down dramatically. That is the model Cisco is pushing right now for the datacenter. I believe a single supervisor N7K is selling for about 80k list, which is a steal. The FEX devices are extremely cheap and great for top of rack devices.
 
The remote office traffic is negligible. What does your main traffic spike at? I ask because a Nexus or 6500 seems like overkill to me, based on what you have described so far. A pair of 3750's with SFP modules would probably meet your needs.
 
Depending on your usage, you could save some money by using 3750 fiber switches as your "core". Just get licensing for IP Services and not IP Base. You can stack them if you need more than 12 fiber ports. The most basic way to get redundancy is to have two core switches, each with one trunk uplink to your cable closets, then set up HSRP between the two core switches.
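
For the HSRP part, the gist is something like this, assuming a user VLAN 10 and a pair of 3750s (addresses, VLAN numbers and priorities are made-up examples):

! Core switch 1 - HSRP active for VLAN 10
interface Vlan10
 ip address 10.0.10.2 255.255.255.0
 standby 10 ip 10.0.10.1
 standby 10 priority 110
 standby 10 preempt
!
! Core switch 2 - HSRP standby for VLAN 10
interface Vlan10
 ip address 10.0.10.3 255.255.255.0
 standby 10 ip 10.0.10.1
 standby 10 priority 90
 standby 10 preempt

Point the clients at the 10.0.10.1 virtual address, and you can alternate which switch is active per VLAN so both boxes carry traffic.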
 
If you're going the 3750 route for your core, don't use a single stack. Make two separate stacks.
 
two cheaper core switches > one expensive core switch

This is too broad, which makes it inaccurate. Using your logic, 2x Netgear switches > 1x Nexus 7K with dual sups.

He doesn't have to buy Cisco, but you're hard pressed to find another vendor with the stability, features and support that Cisco offers.
 
This is too broad, which makes it inaccurate. Using your logic, 2x Netgear switches > 1x Nexus 7K with dual sups.

He doesn't have to buy Cisco, but you're hard pressed to find another vendor with the stability, features and support that Cisco offers.
Sorry I should have been more clear. I was trying to make a point in terms of cost. If you can get away with two 3750Gs instead of buying a 6500 or a Nexus, why not try?

You don't think Juniper is good enough for core layer?
 
Sorry I should have been more clear. I was trying to make a point in terms of cost. If you can get away with two 3750Gs instead of buying a 6500 or a Nexus, why not try?

You don't think Juniper is good enough for core layer?

Sure, Juniper would be one of the few vendors I'd be okay with in the core. There is a strong lack of familiarity with JunOS in the (enterprise) networking community though. If I were starting a greenfield build right now, I'd definitely look at Juniper.

One thing to keep in mind though, list price shouldn't be the determining factor in hardware purchases. You have to take many things into account, like whether your engineers will have a steep learning curve on any new gear, how effective support is and what it costs, etc.
 
Within the data centre we run a pair of 6500s in a VSS, with 4 x 10GbE VSLs between them, for our core. Each cab (approximately 40 of them) has a pair of 3750 switches (stacked) installed, and each stack is connected to the core with a 2GbE trunk. Everything here is fibre. Blade centres are connected directly into the core via Flex-10 interfaces, and standalone physical servers are connected into the 3750 switches.

Our WAN is provided by a pair of 7200 routers in an HSRP configuration which go off to our MPLS cloud (45 Mbps WAN, 30 Mbps internet) to connect with several hundred edge sites. We don't need a high-speed WAN as we are heavily thin-client based. It works well for us. The only thing that throws the occasional spanner in the works is that the perimeter firewall cluster is within the MPLS cloud as opposed to being within the data centre. We have data centre firewalls too, in order to optimise inter-DMZ traffic, but if we were to start again I would personally prefer the perimeter to be within the data centre.
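
In case it helps the OP, each 2GbE trunk is just a two-member EtherChannel with one fibre port from each switch in the stack; a rough sketch of the stack side (port numbers and the LACP choice are illustrative, not our exact config):

! Logical uplink - trunk settings live on the port-channel
interface Port-channel1
 description Uplink to core VSS
 switchport trunk encapsulation dot1q
 switchport mode trunk
!
! One member port on each switch in the stack
interface range GigabitEthernet1/0/25, GigabitEthernet2/0/25
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 1 mode active

On the VSS side the matching port-channel has one member on each 6500 chassis, so the stack sees a single logical uplink and spanning tree doesn't have to block anything.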
 
Not sure if this is out of scope for the thread, but having a design discussion could be cool.

Overall, you have a pretty solid design. I have a couple issues with it though:

1. Not a fan of VSS. I've seen some stupid issues on core/distro VSS pairs that take down an entire network. The shared control plane bugs me. vPC on Nexus is a much cleaner way of doing it, IMO (rough sketch at the end of this post). vPC accomplishes the same thing, but keeps the control planes separate (two brains versus one). You lose the ability to manage both chassis as a single unit, but that's part of the upside for me.

2. 3750s are suboptimal server/data center switches. They have ridiculously small buffers, which leads to a fair-to-excessive amount of dropped traffic. 4948s are superior options (if you're not directly connecting to 6500s with 6700 cards, or using a Nexus solution), IMO. You lose the ability to stack, but, again, that's a good thing IMO.
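
On the vPC point in #1, the separate-control-plane setup looks roughly like this on each Nexus peer (NX-OS; the domain number, keepalive addresses and port-channel numbers are placeholders):

feature vpc
feature lacp
!
vpc domain 10
  peer-keepalive destination 192.168.1.2 source 192.168.1.1 vrf management
!
! Peer-link between the two Nexus chassis
interface port-channel1
  description vPC peer-link
  switchport
  switchport mode trunk
  vpc peer-link
!
! Downstream port-channel towards an access switch, dual-homed to both peers
interface port-channel20
  description To access switch
  switchport
  switchport mode trunk
  vpc 20

Each chassis keeps its own config and control plane; only the port-channels are coordinated, which is the "two brains" behaviour I mean.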
 
To the best of my knowledge we've never experienced any issues with the VSS. Maybe we have been lucky, I couldn't say, but it works well for us. In terms of the 3750s, anything of any business criticality tends to be run within the blade/Flex-10 environment, either dedicated or virtualised within a hypervisor; the 3750s are there more to provide basic copper connectivity for 'low-rent' application servers, auxiliary devices, and things not requiring huge bandwidth.

Our organisation is heavily into mergers and acquisitions, so our data centre has lots of spare capacity in it. Racks are typically only 50% populated, simply because we could potentially get a call to say we're doubling the size of the user base almost overnight and we have to be in a position to react quickly. Overall it provides us with a solid data centre network, although the design is several years old now. I can't see the overall structure changing, but sure, as time moves on we may well end up replacing kit with more modern equivalents.
 