How would you design this topology?

BigD1108

Limp Gawd
Joined
Nov 6, 2008
Messages
165
One of our clients is replacing some of their aging network components with 4 Cisco 2960S switches. Unfortunately in this case, my skills of switch configuration are greater than my skills of network design. I have a really crude network diagram of their basic network layout (4 servers, 4 switches, and a number of endpoints).

How would you experts design the physical connections in such a way as to facilitate some redundancy?

Feel free to ask with any questions if my terrible Visio skills don't convey all the information. Thanks in advance.

[attached diagram: s5rBO.jpg]
 

Here is a virtual layout using XenServer 6.0 and a Dell R710. You can also assign multiple network cards to the images.
[attached diagram: topo1.jpg]
 
I just gave you a more efficient and cost-effective way of networking while bringing them up to cloud networking. But if you cannot understand that, I do apologize.
 
Because Dell PowerEdge R710s are chump change and your new-ish servers just don't match the color of the switches anyways... :rolleyes:
 
He's looking for redundancy with the equipment he has, not a whole new expensive setup.

OP - How much redundancy are you looking for? Does that include WAN redundancy as well? Power redundancy? Or just redundancy between switches?
 
how many servers are there?

do they have to be cisco switches?

you don't really need 4 switches. if your hardware is dying that often you need a better power system; no point in having 4 switches with 2 NICs per server when a single motherboard could go, or a power supply, or memory, or a hard drive...

can you give us more details of the entire network?
 

Agreed. If you're looking for redundancy, a little more information on the layout of this network is needed. Also, four switches for four servers? :confused: Unless some of the switches are in another cabinet, in a different location than the servers.
 

What you gave him is a headache and 30 seconds of his life he'll never get back. He asked how to utilize the 4 new switches they are getting, not to bail on his current setup and migrate to the "cloud".

Be part of the solution not part of the problem.
 
My god. The 2960-S switches have stacking capabilities. If you already have them, then use them. Why the hell would anyone suggest a completely new design??? Derp derp, buy XenServer, build a Citrix farm, use a script I stole from the internet and stuck my name on. How is any of that actually helpful?

Anyway, just stack them up, then create EtherChannels that span two or more stack members. Say you have two NICs in each server: plug server 1, NIC 1 into gi0/1 on switch 1, then plug server 1, NIC 2 into gi0/1 on switch 2. Server 2 NIC 1 goes to gi0/2 on switch 1, NIC 2 goes to gi0/2 on switch 2, etc etc etc. Configure teaming on the servers and EtherChannels on the switches; there's your redundancy.
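
For reference, a rough sketch of the switch side (the interface and VLAN numbers here are made up; on a FlexStack the members show up as gi1/0/x, gi2/0/x, and so on):

```
! Server 1: NIC 1 on stack member 1, NIC 2 on stack member 2
interface range GigabitEthernet1/0/1, GigabitEthernet2/0/1
 description Server1-team
 switchport mode access
 switchport access vlan 10
 channel-group 1 mode active   ! LACP; the server's teaming must speak LACP too
!
interface Port-channel1
 switchport mode access
 switchport access vlan 10
```

If the server's teaming software only does switch-independent balancing, skip the channel-group and let the NIC driver handle failover instead.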
 
Careful guys. If you rag on sean too much he will tattle again to the mods and your posts will get deleted. :p

Looking at your design, OP, I don't see why you need more than 2 switches:
1 switch for your endpoints, which can uplink via fiber or even stack with the switch for the servers.

@sean, I also fail to see how your design addresses the endpoints, which you seem to have completely left out.
 
use 2 switches as your core

one link from each server to each core switch

servers configured for LACP

2 core switches connect to each other with 2 links and each core switch connects to the other 2 switches.

spanning tree on all switch ports. make sure you have it setup properly to send traffic on your redundant switch-switch links, even if someone loops the network.

only one firewall?
that's fine, link it to either of the core switches
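
A hedged sketch of the core-side pieces (VLAN and port numbers are placeholders): rapid PVST everywhere, one core forced to be the root, and the two core-to-core links bundled so both carry traffic instead of spanning tree blocking one.

```
spanning-tree mode rapid-pvst
spanning-tree vlan 1-4094 priority 4096   ! on core 1 only: makes it the root bridge
!
! the two physical links to the other core switch, bundled into one logical link
interface range GigabitEthernet1/0/25 - 26
 channel-group 1 mode active               ! LACP
!
interface Port-channel1
 switchport mode trunk
```

The single uplinks down to the access switches stay plain trunks; spanning tree blocks the redundant path until the primary fails.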

test everything, document how it's done, and more importantly, why.

make backups, TEST them!! keep an offsite backup. test your backups periodically.

have network monitoring, including monitoring servers for low disk space, lack of memory, high cpu utilization, etc...

UPS, with an environmental monitor for your server room. The UPS should have enough runtime so that if the power fails, you have time to get alerted, dial in, and shut things down manually; or ideally it's configured to do all the graceful shutdowns for you.

camera with audio for the server room: you want to see if something crazy is going on, and hear things (like failed cooling fans or the AC not running)

it goes without saying that the server room is locked and keys are strictly limited to those who need to work on the servers/network. It should not be used as a work space or storage area. If you can hook it into your building security system with a door swipe, so you have audits of who goes in, that's even better.

ILO or other lights out style management for servers, on a separate network

make sure you can remotely dial in and access that separate network. in case something crazy happens with your production network, you want to dial in remotely and fix things.

configure group policy or your antivirus software to prevent people from running anything off USB sticks.
 
Thank you for all of the advice so far. I suppose some more information is in order as to the reasoning behind the number of switches, etc. There are 75 endpoints, hence the need for four 24-port switches.

The other components of the network (power management, cooling, network/environmental monitoring, backups, etc.) are already in place. This particular client is simply replacing their 4 aging Dell 3424p switches, and I wanted to verify that best practices were followed in terms of switching redundancy, since it definitely wasn't considered when these switches were put in place (long before we brought them on as a client).

We'll also be re-cabling their entire rack, since it is a complete mess at the moment.
 
Keep it simple. I'm assuming this is a single subnet with no L3 action happening on the switches.

Did you get the stacking modules with them? If so, and if they're in the same rack, I'd just stack them. Servers with LACP connections spanning switches, or smart load balancing/failover.

If not, I'd create 2-gig port channels between adjacent switches, starting at the top. Then a 1-gig trunk from Switch4 to Switch1 for redundancy; spanning tree will block the 1-gig link. Or make the Sw4-Sw1 connection 2-gig and let spanning tree sort it out. You could also control what gets blocked via spanning tree costs.

Switch1---2G---Switch2---2G---Switch3---2G---Switch4
|__________________1G______________________|

For the servers I'd split the nics between the switches, probably Switch2 and Switch3. Then use the broadcom or intel utility to do smart load balancing/failover. I don't think these switches support LACP spanning across two switches unless stacked.

Plug in the firewall wherever. I'd probably use Switch2.
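
If you go the daisy chain, which link spanning tree blocks can be steered with port cost. A sketch (interface numbers assumed):

```
! On Switch4: make the link back to Switch1 the least-preferred path, so
! spanning tree blocks it in normal operation and opens it only on a failure
interface GigabitEthernet1/0/24
 description Redundant-uplink-to-Switch1
 switchport mode trunk
 spanning-tree cost 100
```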
 
I think 4 switches is the max in a 2960S FlexStack. Would have been better if the endpoint switches were 48-port models. This is going to cause near-term growth problems.

Using FlexStack is definitely the optimal way, but the 2960S's do have severe limitations: namely, a maximum of 6 LACP groups. Combined with your endpoint count and port count, you can't really make anything redundant.

If you can get FlexStack cards, get 'em.

otherwise, you'll have to chain them with LACP port channels, and you can't afford more than 2Gb of interswitch bandwidth simply due to lack of ports. Do what Avey said, except I'd keep the links all the same size, even if spanning tree is going to make one inactive. Personal choice.

Each switch will have 4 available EtherChannel groups after that. Do what you can to split the servers up. But with 75 endpoints and only 86 available ports....

A 3750X setup would probably have been more optimal, but obviously a bit pricier. I would have had them get two 48-port 3750X's with the base license (nothing fancy there, right?). StackWise and StackWise Plus mean you wouldn't have to worry about a power supply taking down the stack, and adding PoE or more ports in the future would have been a breeze, along with as many LACP groups as you pleased.

//edit: oops thread rez.. fuck.
 