PoE and patch panels

Exavior

I thought I would get some opinions from people who have done this before and would know of anything I need to watch out for or be concerned with.

I am preparing for VoIP phones to be rolled out later this year, and due to the current horrible design of our network I'm scrapping everything and starting over. The current design is ten 24-port switches scattered around the building, some on protected power to be up all the time, others just on normal AC. Some runs follow standards for termination, and while I've found and fixed a lot of them already, others follow whatever random color order the person decided to follow that day. White-brown, brown, white-blue, blue, white-orange, orange, white-green, green sounds like a good order for today for all my runs :p Some of it is shielded wire with unshielded ends... So I just need to start over.

As part of my new design, instead of having one switch feed another, which feeds another, which feeds another... I want to have a lot more runs back to a few central locations. The thought was to put a bunch of patch panels around the building, then fill them with pre-run connections back to patch panels by the switches. New runs would then only have to reach a patch panel instead of a switch, which also reduces the worry about keeping things up during a power outage, since the few key locations with the larger switches would have power at all times.

My only concern is: do I need to worry about anything in regards to PoE and patch panels? That will give me three punch-down points between the power source (switch) and the phone at the end: a patch panel toward each end, then a jack at the very end near the phone. Do I need to worry much about loss from this as long as I make sure everything is punched down tightly? With how I have it planned I won't have any runs near the 300-foot mark, more in the 100-200 foot range.

The plan for the switch side is a Cisco 4506E with dual 4600W power supplies and a 7th-gen management card for the main location, and a smaller 4503E for the second.
 
No, there's nothing to worry about with PoE and patch panels. Stay within cabling standards and you'll be fine.
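
To put rough numbers on it, here's a back-of-the-envelope sketch (not a spec calculation; it assumes Cat5e worst-case DC resistance of about 9.4 ohms per conductor per 100 m, and the standard 802.3af figures of 15.4 W sourced / 12.95 W guaranteed at the device):

```python
# Rough PoE cable-loss estimate. Assumed figures (not from this thread):
# Cat5e DC resistance ~9.38 ohms per conductor per 100 m, 802.3af max
# current of 350 mA, power carried over two pairs.
FT_PER_M = 3.281
R_PER_100M = 9.38      # ohms per conductor per 100 m (Cat5e worst case)
I_MAX = 0.35           # amps, 802.3af maximum

def loop_resistance(length_ft):
    """DC loop resistance: the two wires of each pair share the current."""
    r_conductor = R_PER_100M * (length_ft / FT_PER_M) / 100.0
    r_leg = r_conductor / 2.0      # parallel conductors within a pair
    return 2.0 * r_leg             # out on one pair, back on the other

for run_ft in (100, 200, 328):     # 328 ft is roughly the full 100 m channel
    r = loop_resistance(run_ft)
    print(f"{run_ft:3d} ft: ~{r:4.1f} ohm loop, ~{I_MAX**2 * r:.2f} W lost in cable")
```

Even at the full 328 ft that works out to roughly 1.2 W lost in the copper, against the ~2.45 W the 802.3af budget allows for a worst-case 20-ohm loop including connection points, so a few clean punch-downs barely dent the margin.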
 
Ok, thanks for the reply. That is what I was thinking, but I wanted to check to make sure I wasn't overlooking something. Guess my planning part is done then; now to get management to approve $17,000 for the 4506E to get started on this. Planning the design and the rewiring will be the easy part, that will be the hard part ;)
 
Why the 4600?

Unless you need an enormous amount of bandwidth and features, you'll be able to do that fairly easily with stacked switches.

I actually just did this for a customer. They bought a pile of HP 5120 PoE+ switches and used HP's IRF (stacking, but it works at layer 3 too) to get a single device from a management perspective. The solution was less than 2/3 of the cost and included a lifetime warranty.
 
Guess there is no kill like overkill... We have a Dell 6248P stack and it works well with the VoIP deployment we have.
 
I have four 6509-V-Es with dual Sup720s providing PoE for my VoIP.

Overkill? Not really. All of my user access switches are the same... the only difference is the VoIP ones have PoE linecards instead. That way, if there's a problem with one of the regular user switches, people can just move their desktops to the phone port under their desk until the issue is resolved.

IMO, it also makes management a lot easier than having random things all around, but it's definitely pricier. Exavior might have a lot of Cat4Ks in prod already and be sold on their reliability, feature set, and familiarity... if you've got the budget for it, why change?
 
If you're properly deploying a bunch of switches in a stack, they'll be a single managed unit: one IP address manages the whole thing. Once set up, it's really no different to manage than a chassis switch.

The drawback, of course, is the backplane. A 4500 has a 48Gb-per-slot backplane. You're probably not connecting a bunch of stacked switches together with 40Gig; you'll probably have 10Gb, or maybe a ring, so 20Gb; either that or a dedicated stacking technology, which may be higher.
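
Quick napkin math on that, using the numbers above (worst case with every edge port at line rate; real stacking interconnects vary by vendor):

```python
# Worst-case uplink oversubscription for a 48-port gigabit unit.
# Per-scenario uplink numbers are from the post above; actual stacking
# bandwidth differs by product line.
demand_gbps = 48 * 1   # every port flat out at 1 Gbps

scenarios = {
    "4500 linecard (48G/slot backplane)": 48,
    "stacked switch, single 10G uplink":  10,
    "stacked switch, 2x10G ring":         20,
}
for name, uplink in scenarios.items():
    print(f"{name}: {demand_gbps / uplink:.1f}:1 oversubscribed")
# 1.0:1, 4.8:1, and 2.4:1 respectively
```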

The point is, do you need that? If your network is all normal office workers running Outlook and Excel, you don't. Take a look at this graph. That's this week's traffic for a network that spans 15 buildings and contains 2,000 PCs. The graph is smoothed, so you don't see peaks for backups and whatnot, but you get the idea.

Buying more than you need is not particularly a wise idea, politically or budget-wise. In terms of budget, I'd much rather go to management and say "here's a new way of doing things; we've saved 20% and included a better warranty and some training/other upgrades." Politically, you don't want auditors or your company's outside accounting firm asking why the IT capital budget is so much higher than at similar-sized companies.
 
I was thinking it was a little overkill myself, but I will fill up every port in the thing. So it's either a few larger switches or a bunch of stackables. It isn't like I'm getting these for 100 ports; the main building will need about 500 or more. The other two buildings only need about 48 each, so they will just get stackables.
 
As far as anyone questioning the purchase, I don't have to worry about that. We don't have an outside accounting firm or anyone watching over our books, and it would be the highest-up approving the purchase. It will just keep me on par with my spending from last year. Last year was server upgrades that were long overdue; this year will be the network.
 
Looking through some of the other options, it really isn't much cheaper. The Dell 6248P with redundant power supplies and 3-year mission-critical support comes to $5,200 each, so the five I would need to equal the one 4506 come to $26,000. I can get the 4506E and two 48-port blades for $17,000 with a 1-year warranty. Since not everything is PoE, I could get away with one more 48-port PoE blade plus two normal 48-port gig blades, so probably about $10,000 for those. That means it comes to about the same price, though I would have to pay for support again sooner, so it would come out a little more over those 3 years.
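
To lay that math out (list prices as quoted above; the extra-blade figure is my rough estimate):

```python
# Rough totals from the prices quoted above.
dell_total  = 5 * 5200            # five 6248P w/ redundant PSUs, 3yr support
cisco_total = 17000 + 10000       # 4506E + two blades, plus three more blades

print(f"Dell stack:  ${dell_total:,}")    # $26,000
print(f"Cisco 4506E: ${cisco_total:,}")   # $27,000, before support renewals
```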

HP isn't even going to be considered. They give us horrible support on the single HP server we have, taking 2-3 weeks to get out to fix anything. So there is no way they are going to have any switches on our network.

Although in the end, to keep costs down, we might just keep our GS-724s in use for the data side of things and add PoE switches for the phone needs. I would prefer to have everything be one brand, but I'm fine with any method that works. It doesn't change that I still need to rip out every last foot of wiring in all of our buildings and use it to strangle the people that were there before me. Although I need to figure out how to run the new lines first, and what color scheme I want to use.
 
HP was always awesome when I did warranty work for them up here in Canada. Call at 4 in the afternoon and the next morning you would have the appropriate parts by 9:00 AM. I loved dealing with them.
 
Not for us. We are a small rural independent phone company / ISP in the northern part of Indiana, so our techs come from the Chicago area. A few times they couldn't get us a tech from there, so they sent one from South Bend.

The server we have is from the Integrity line (Itanium), and every time I call for support, what should be a simple thing turns into a huge ordeal with them. The last issue was back in December: while doing some stuff I noticed that the battery on the RAID card was dead. I contacted them, and it took them a week to look over logs, because apparently it doesn't matter that Insight Manager says the battery is dead, it could be reporting false information. :rolleyes: Chicago didn't have the part, so HP had to ship it to them; they called back a week later saying they had the part but wouldn't be able to make it out to us for at least another 4 days to swap it. Then the tech called that day and rescheduled for a few more days out. So by the time he finally made it out, it was 3 weeks later. And that has happened a few other times where that was the turnaround time.

When we first got the server, it would restart a few times a day at random. Given that this is our billing server with all our customer data, it gets to be a pain in the ass when it drops off in the middle of the day while somebody is on the phone with a customer or trying to process a service order. Every time, they would look at the logs for about a week, tell us it was CPU0, and swap it out; a month later it would start up again, they'd say it was CPU0, they'd replace it... This went on for about 12 months of them replacing the CPU, until I finally just refused to let them come back unless they brought the voltage regulator for the CPU instead. They looked through logs for a month or two before they finally decided to try that. It finally stopped the rebooting. During that time I had to explain why the $250,000 system upgrade (hardware and software) we had just bought crashed all the time while HP refused to do anything.

Had a hard drive go bad once in that server, and it took 2 days for the replacement to make it to me. It is in RAID 5 with a hot spare, so not that big of a deal, but I lost a hard drive in a Dell with RAID 6 and 2 hot spares and they were there in 2-3 hours with a replacement.

And even before I can get to that point, I have to prove that I have a warranty. Every time I call and give them the service tag for the server, they say it doesn't have a warranty and never has; then I have to give them the service account number or whatever that is, and they see that it is listed under that and that we do have support. And I've had them pull the same thing on the few HP laptops we have: contact them, they tell us there isn't any warranty on the device, we go around in circles for a few weeks, then get them to realize we do have warranty.

But everyone's experience is going to be different. Some people might have great support with HP and shit support with Dell; others might be like me and have shit support with HP and great support with Dell. Others might have great support with both, and others crap support with both.
 
I've seen that too, where you'll have good support with one vendor and not with another. Everyone's mileage always varies.

I understand you have a pretty nice budget, but I guess just because you can doesn't mean you should.



Oops, sorry, I should have read your last post more closely.
 
Yeah, I know I shouldn't waste money for the hell of it. Most of my stuff is done on the cheap/free side. It's just the desire to do it right, if I can get it approved, instead of just making it work. I did the upgrade from 2924 switches to Netgear as that was a cheap way to get gigabit through the building, but there have been a lot of times where I missed some feature of a Cisco or similar switch. When trying to find a rogue device, knowing which port a MAC address is coming from is helpful, instead of just knowing it is somewhere. I had to unplug all the switches but one, see if I could still ping the device, then turn on one more switch and try to ping again... But I dealt with that, since those cost $250 for a 24-port gig switch instead of a few thousand for a Cisco.

But when it comes to something that will be running our phone system, I do want to make sure I'm doing a little more than "just works": make sure I can monitor everything I need to and track down issues as fast as possible. Plus, since I'm moving to the central office and out of the IT department in the near future, the ISP admins will be taking over this part of my job. They are used to Cisco and have 6500s and the like, so it would be better to hand them something just like their existing gear instead of a data center full of Cisco and then a building full of random stuff. Although that is more of a preference than anything; my only real needs are redundant PSUs and mission-critical support. I guess buying cheaper switches, three for every two I really need, would be fine also. While hardware failure isn't extremely likely, it is possible, and I want to head off any issues down the road. I will find out this week what I can get approved, as I will be making my presentation for everything that's needed.
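
For what it's worth, that port-hunting exercise is scriptable once the switches speak SNMP. Here's a hypothetical sketch (the switch IPs and community string are made up, and it assumes the net-snmp snmpwalk tool is installed; dot1dTpFdbPort is the standard BRIDGE-MIB table mapping learned MACs to bridge ports):

```python
# Hypothetical sketch: locate the switch (and bridge port) that has learned
# a given MAC, by walking the standard BRIDGE-MIB forwarding table.
import re
import subprocess

FDB_PORT_OID = '1.3.6.1.2.1.17.4.3.1.2'   # dot1dTpFdbPort: MAC -> bridge port

def find_mac(switch_ip, community, mac):
    """Return the bridge port the MAC was learned on, or None."""
    want = tuple(int(b, 16) for b in mac.split(':'))
    out = subprocess.run(
        ['snmpwalk', '-v2c', '-c', community, '-On', switch_ip, FDB_PORT_OID],
        capture_output=True, text=True).stdout
    for line in out.splitlines():
        # e.g. ".1.3.6.1.2.1.17.4.3.1.2.0.17.34.170.187.204 = INTEGER: 14"
        m = re.match(r'\.([\d.]+) = INTEGER: (\d+)', line)
        if m and tuple(int(x) for x in m.group(1).split('.'))[-6:] == want:
            return int(m.group(2))
    return None

for ip in ('10.0.0.2', '10.0.0.3'):        # made-up switch addresses
    port = find_mac(ip, 'public', '00:11:22:aa:bb:cc')
    if port is not None:
        print(f"{ip}: MAC learned on bridge port {port}")
```

One caveat: uplink ports report every MAC that lives behind them, so the device actually sits on whichever switch shows the MAC on a non-uplink port.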
 
If you're properly deploying a bunch of switches in a stack, they'll be a single managed unit: one IP address manages the whole thing. Once set up, it's really no different to manage than a chassis switch.

That's not what I meant by harder to manage. Rather, I have tons of 6500s. Now let's say I buy a Dell or HP stack for VoIP. That means a new hardware platform, which entails new code with new bugs, new limitations and nuances, new spare parts to keep around, a new thing to learn and re-learn every time there's a problem since you hardly use it, a new support team, a new account/sales team, and possibly separate network management software. If you use an existing product in your infrastructure, you can expect it to behave exactly like your other gear without dealing with any of those issues. In my business, that ends up making our network more reliable, which means our business is more successful. To each his own, though.

The drawback, of course, is the backplane. A 4500 has a 48Gb-per-slot backplane. You're probably not connecting a bunch of stacked switches together with 40Gig; you'll probably have 10Gb, or maybe a ring, so 20Gb; either that or a dedicated stacking technology, which may be higher.

The point is, do you need that? If your network is all normal office workers running Outlook and Excel, you don't. Take a look at this graph. That's this week's traffic for a network that spans 15 buildings and contains 2,000 PCs. The graph is smoothed, so you don't see peaks for backups and whatnot, but you get the idea.

Off topic, but we're looking at 40gig/100gig right now. I already have some 40gig switches deployed. To say the least, 40/100gig is a giant headache: 40gig requires 8 MMF fibers and 100gig requires 20 MMF fibers. I'm thinking we're going to convert our panels to SMF, because I see MMF as dying and the MMF MPO connector as a dirty hack.

One more thing I'll mention: I find daily bandwidth graphs to be absolutely useless. The problem in my industry is that packet loss is forbidden, even just one packet. You can have tons of serious bandwidth problems that won't show up at that resolution. At 10gig or above, most switches can't buffer more than a few milliseconds, which means you'll drop packets all over the place, but a 1-minute rollup graph won't show it. I have gear that sends off alarms if we cross a 7Gbps mark at a 10-millisecond interval, which reliably shows bandwidth capacity problems before we're dropping packets.

What most people don't understand is that 10Gbps is a signaling rate; you can't send at "3Gbps". When a host is sending data, it is sending at the full 10Gbps, but it may only do so for a few milliseconds. If another host is sending data at the same time, it will also do it at 10Gbps... and if your uplink is 10Gbps, that means the switch has to buffer and/or drop packets because it needs to send 20Gbps. Further, if a rollup graph shows "500Mbps" on a 1Gbps link, it doesn't mean the hosts were sending at a rate of 500Mbps; it means they were sending at 1Gbps half the time and not sending the other half. At 1-minute resolution, I'd already expect a lot of output drops on that link. By the time the graph hit 800-900Mbps, people would already have been complaining for months about periodic slowness.
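
To put numbers on the microburst point (made-up but representative figures; the arithmetic is the whole point):

```python
# Two hosts each burst at line rate for 5 ms into a shared 10G uplink.
LINK_GBPS, BURST_MS = 10, 5
offered_gbps = 2 * LINK_GBPS                       # 20G offered to a 10G port

excess_bits = (offered_gbps - LINK_GBPS) * 1e9 * BURST_MS / 1e3
print(f"Buffer needed for one burst: {excess_bits / 8 / 1e6:.2f} MB")
# -> 6.25 MB for a single 5 ms collision; a switch with less buffer drops packets.

sent_bits = offered_gbps * 1e9 * BURST_MS / 1e3    # what the rollup graph integrates
print(f"Same burst on a 1-minute graph: {sent_bits / 60 / 1e6:.1f} Mbps average")
# -> ~1.7 Mbps: a link that just dropped packets looks nearly idle.
```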

This isn't meant to be an attack post -- just trying to share what I've learned.
 
I've never contacted HP server support, but their networking support has been great. The best thing is all their switches have lifetime warranties with no maintenance contract; just call them up and they will cross-ship a new switch. We also save money by buying all refurb switches.

At work we run all HP switches, with a few old Cisco switches and routers that are being replaced soon. Our network has two 5400zl switches for the core and about ten 2800-series switches for the distribution layer (two per closet). All of our closets have fiber run to them, and each distribution switch connects back to both core switches. Each closet has 6-10 2600-series access switches that connect to both distribution switches in that closet. We usually buy all PoE switches at the access layer; PoE is not much more per switch.

For patch panels we use Panduit. They are expensive; however, the jack is reusable, which is very handy sometimes. The same jack is used between patch panels, wall jacks, and different mounting boxes, which is really nice.

Also, HP has their PCM manager, which can scan your switches, read/write configs, and create network maps. It has a free and a paid version. The feature I use most in PCM is the node tracker, which can take an IP or MAC and find which switch port it is plugged into.
 