Set Cisco Router Bandwidth/QOS/Rate Limit

Quikstrumental (Limp Gawd, joined Oct 18, 2005, 336 messages)
Hey guys,

I apologize in advance if this is a novice inquiry, but our company just switched from point-to-point T1s to Metro Ethernet.

On one point-to-point, from our main office to one of our high-profile locations, we had two bonded T1s. Now this site has a 3 Mbps Metro-E link, but it's being over-saturated. I don't know what type of QoS implementation our T1 provider had, but it prevented flooding. Now I'm getting horrendous latency as office peak hours approach, since our Metro-E provider has no QoS on the mesh.

Ultimately, my question is: what's the best way to set a FastEthernet port on a Cisco 1800 series router to limit all bandwidth to 3 Mbps? At the moment, I don't have a preference as to which traffic takes priority.

I tried the rate-limit command, along with a speed calculator I found online, but that slowed the network down immensely. Thanks in advance, and feel free to ask any questions about information I may have left out.
 
You should be utilizing DSCP-based QoS. I'll get back to you with some advice if I have time later.
 
I set up QoS policies every day for Metro-E customers. We never use rate-limit.

What you need to do is set up QoS and then put a shape average 3000000 statement under the class-default. This will shape traffic to 3 Mbps passing through the service policy. Alternatively, you can create a specific shape policy and apply it under class-default.
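As a minimal sketch of that (the policy name is a placeholder, and FastEthernet0/0 stands in for whatever your WAN-facing port is; shape average takes bits per second, so 3000000 = 3 Mbps):

```
policy-map SHAPE-3M
 class class-default
  shape average 3000000
!
interface FastEthernet0/0
 service-policy output SHAPE-3M
```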

As for DSCP, you should only be using match DSCP statements if your traffic is arriving at the device pre-tagged. If not, you'll be matching based upon ACLs you apply under each class-map. If you're passing that QoS traffic off to your provider's MPLS, you should also include set DSCP statements to ensure you're tagging the traffic heading out from the router so it can be caught by your provider's QoS policies. (You should also call them to ensure they're using a similar QoS configuration and matching the right DSCP values.)

EDIT - I checked back on this thread and realized I'd never explained why it worked with T1s, and why we put the shaper in class-default. With T1s, whether a single serial link or a multilink, the router knows how much bandwidth is available to allocate to the QoS policy (it simply adds 1.536 Mbps times the number of serial links... or, to be more accurate, 64Kbps times however many TDM channels there are in aggregate). By default, it's 75% of the bandwidth of the T1/Multilink. This is usually changed with a max-reserved-bandwidth 100 statement on the T1/Multilink to allow the QoS policy to use all of the bandwidth of the T1/Multilink. With Metro-E on the other hand, QoS will attempt to use all of the bandwidth of the ethernet port that the QoS policy is configured outbound on, and usually the Metro-E link isn't that fast. Hence the shaper to control the traffic so it doesn't spike and hit a policer somewhere that's going to just drop the excess traffic.

As for why you put the shaper under the class-default, you don't need a shaper to work until the circuit starts to max out, and when it does, you want to be shaping traffic out of the best-effort queue, not your priority queues. If you're dropping all of your best-effort traffic and starting to encroach upon your priority queues, QoS will begin dropping the priority traffic anyhow.
 
Thanks, Electro!

Could you shed some more light on the commands?

I have created a policy-map with a class-default shape average 3000000, but I am only able to apply it to the interface as an output policy. Is this correct?
 
Electro, thank you very much for your suggestion!
I've applied it, and now the latency has stabilized and come down with the shape average in place. I notice it fluctuates and climbs at times, but it drops back down pretty quickly.

Kalthak, thanks for confirming that it was output only. :D
 
Ahh! Looks like the latency started up again, and I kept getting dropped packets.
What's the difference between the shape average command and the traffic-shape command?

Any suggestions?
 
Shape average allows no excess burst.

Where are the packets being dropped from?

If you consistently exceed your available bandwidth, you will drop packets, regardless of a shaper. A shaper will only prevent bursty traffic from getting dropped by queueing it until the transmit ring shows availability. If that doesn't happen, traffic will get dropped.

But doing it the way I described will at least cause only the best-effort traffic to get shaped and dropped. If you max hard enough though, and if there's a lot of priority traffic going across the link, you'll start to see priority traffic drops.

If you're seeing regular priority traffic drops, you basically have three choices: move some of your priority traffic to best-effort to preserve what is truly important, reduce circuit utilization, or get more bandwidth.

EDIT - I saw something the other day that made me chuckle. A guy showed me his QoS policy; it was a 99% EF-only queue, and he had an access-control list with all sorts of crap (Citrix, Voice, RDP) tagging his traffic as EF. He was complaining that he was seeing EF drops. A brief look at his class-default showed very few matches; he had prioritized virtually all of his traffic, essentially making it all best-effort. :)
 
For education's sake, can you show some examples of a policy map, or maybe link to your favorite document you use for reference/teaching? I've really never done much with QoS, and I would love to learn more so I can test-limit a few things at home and see how it works out. I want to see how things will work out with Usenet when I limit things on my end :)
 
I learned it by doing it, unfortunately. I would direct you to a favorite website of mine, introduced to me years ago by some networking guys on this very forum, firewall.cx, but oddly, I don't think they have an article on configuring QoS. Still, I recommend opening that website in another tab and checking it out some time.

So QoS works in a Cisco router in a few pieces. First, you need something to define the different classes of traffic you would like to have. Thus is born the class map:

Code:
class-map match-any QOS_REAL-TIME
 match ip dscp 46 (alternatively match ip dscp ef)
class-map match-any QOS_CRITICAL-DATA
 match ip dscp af41 (alternatively match ip dscp 34)
 match access-group name IMPORTANT-SHIT

Now you can see the first class-map for real-time traffic is set to match packets tagged with DSCP value 46, aka EF (Expedited Forwarding). Usually this is voice traffic, and often the tagging is done by the voice equipment, or IP PBX.
The second, for critical data, matches anything arriving pre-tagged as DSCP value 34, aka AF41. It also matches anything that makes it through the access-group IMPORTANT-SHIT.

Code:
ip access-list extended IMPORTANT-SHIT
 permit tcp any any eq citrix-ica
 permit udp any any range 6112 6119 (StarCraft is serious business!)

So now we see that the access-list IMPORTANT-SHIT is permitting traffic that is coming through as Citrix traffic, or UDP on Battle.Net ports. We don't want the network admin's StarCraft sessions lagging, now do we?

Okay so now we need something that will actually govern these classes of traffic we've defined. Here comes the infamous policy-map.

Code:
policy-map 75r_24c
 class QOS_REAL-TIME
  bandwidth percent 75
 class QOS_CRITICAL-DATA
  bandwidth percent 24
  set ip dscp 34
 class class-default
  shape average 1000000000 (One Beeeellion... bits!)

Alright, now we've got the meat of it put together. Standard practice at work is to name the map based upon the bandwidth percentages used but you can do whatever. So we define each class, and underneath we allocate a percentage of the bandwidth we want to guarantee that traffic.

I've also got a set ip dscp 34 statement under my CRITICAL-DATA class. I'm tagging that traffic because while I was matching AF41 (DSCP 34) traffic, that Citrix and StarCraft traffic wasn't coming to me pre-tagged. I'm tagging it now so that when I hand it off to my provider, they can see it come across tagged and match it (of course, this is assuming I've called them and had them set up QoS on their side). This way they don't need to build any ACLs. Besides, they don't need to know that I've got StarCraft in that priority queue. ;)

At the bottom I've got my shaper, shaping traffic heading out of the interface to a mere 1 Billion bits (though I'm guessing a normal Cisco router would give me a hard time for this when I try to apply it to a FastEthernet port at 100 Mbps! :p) Also note that you can set up a separate shape-policy and apply it instead of the shape-average statement I used above. The shaper isn't necessary on Serial links since the router already knows the max bandwidth of the circuit.

IMPORTANT NOTE: instead of using the bandwidth statement you can use a priority statement (i.e. priority percent 75). In addition to guaranteeing bandwidth, priority actually delays other traffic to allow the priority traffic to the front of the line heading into the transmit ring (buffer) of the physical interface the policy is applied to. This is important when handling voice and other traffic VERY sensitive to latency or jitter. However, while the bandwidth statement allows spare bandwidth from one queue to be allocated to other queues that need it, priority does not. So if my CRITICAL-DATA queue has spare bandwidth and my REAL-TIME queue does not, with the bandwidth statement the REAL-TIME queue will be allocated some of CRITICAL-DATA's free bandwidth. With priority statements this will not happen; 75 percent is all REAL-TIME will ever get when the circuit is congested.
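For comparison, here's a sketch of what that same policy might look like with the real-time class switched to a priority queue (same hypothetical class names as above; only one class should use priority):

```
policy-map 75p_24c
 class QOS_REAL-TIME
  priority percent 75
 class QOS_CRITICAL-DATA
  bandwidth percent 24
  set ip dscp 34
 class class-default
  shape average 1000000000
```

Keep in mind that when the link is congested, the priority class is policed at its 75%; excess real-time traffic gets dropped rather than borrowing from other queues.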

We're not done though! We have to tell the router which interface QoS applies to. Generally, QoS is applied on an outbound interface.

Code:
interface FastEthernet0/0
 description Connection to a Series of Tubes
 ip address 123.123.123.123 255.255.255.0
 service-policy output 75r_24c

...aaannnd there we have it! QoS is now configured for this router.

DISCLAIMER - My Cisco lab is out of commission at the moment, so I did this from memory and a few quick peeks at THIS doc. I've probably mistyped something; if I have, that's my bad. Hope this helps! Gonna take this post and throw it on my blog and call it a day. :)

EDIT - Oh, right! I forgot to hit upon max-reserved-bandwidth! With a serial interface, the router knows the bandwidth capacity of the link, but by default it only allows 75% of it to be used by a QoS policy. To change this, you apply max-reserved-bandwidth 100 to the serial interface to allow the QoS policy to use all of the available bandwidth. It's not unusual for me to find QoS policies set up and applied but completely inactive within a router because the policy calls for more bandwidth than the 75% default can provide. A show policy-map int Serialx/x will usually reveal this. In addition, standard practice at my workplace is to only allow the QoS queues to add up to 99%, not 100. The reason for this is to prevent the QoS policy from smothering traffic like keepalives or route advertisements.
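Putting that together on a T1, it would look something like this (the interface name is hypothetical; bandwidth is in kbps):

```
interface Serial0/0/0
 bandwidth 1536
 max-reserved-bandwidth 100
 service-policy output 75r_24c
```

One caveat worth noting, if I recall correctly: newer IOS releases that use the HQF queueing framework (12.4(20)T and later) ignore max-reserved-bandwidth and allow the full bandwidth by default.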

EDIT2 - A couple other quick points. In an ADTRAN, QoS is set up almost exactly the same, with a few minor changes in syntax and the fact that the class-map is actually merged within the policy map (which is called a "qos map"). Additionally, in Ciscos, policy maps can be nested, much as I described a shaper policy being nested within class-default under the main QoS policy. This is often done when applying QoS to traffic from multiple VLANs. Another thing worth mentioning is the random-detect statement you may sometimes see under class-default in QoS policies. This causes the best-effort queue to drop random packets when congestion occurs, which triggers TCP retransmits and thus lowered TCP window sizes, effectively throttling TCP traffic down to reduce bandwidth maxing. Random-detect drops can be weighted based upon DSCP values, and the drop rate can be adjusted as well. My company doesn't use this statement, as we rely upon shapers to throttle traffic and upon our customers to decide if they want to enforce limitations upon their TCP streams.
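If you did want to experiment with WRED, the sketch is just a couple of lines under the best-effort class (all thresholds left at their defaults here):

```
policy-map 75r_24c
 class class-default
  fair-queue
  random-detect dscp-based
```

The dscp-based keyword weights drop probability by DSCP value; plain random-detect weights by IP precedence instead.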
 
I'm glad I asked, cause holy shit that's great. That's the best explanation I've seen of that material (been digging into QoS books). Thanks Electro, you sir rock!
 
Great write-up, Electro!


If you're seeing regular priority traffic drops, you basically have three choices: move some of your priority traffic to best-effort to preserve what is truly important, reduce circuit utilization, or get more bandwidth.


Since the site I'm experiencing issues with is high-profile and high utilization, we're going to move to a single 10 Mbps-provisioned link.

If I set the fast ethernet port to 10 Mbps, will this auto queue or something similar, once the line gets pegged?
 
It's funny, I ran into a ticket earlier where I found a WAN-facing FastEthernet port set at 10 Mbps that also had a 10 Mbps shaper applied. I sat there thinking for a minute about whether this would be a problem. Ultimately I decided it would be, as a hardware bandwidth limitation is going to result in more traffic being dropped when the circuit maxes out than if just a 10 Mbps shaper were employed.

Ultimately, a shaper tries to make traffic conform to the 10 Mbps limit by delaying packets and timing transmits during dips in bandwidth utilization, so as to promote a steady flow of data across the circuit. Just using a negotiated bandwidth on the circuit will simply result in packets being transmitted when they can be, and any packets that don't fit into the hardware transmit queue being dropped.

I guess the simplest way to describe it is that a shaper is simply more sophisticated. It's designed to deal with congestion scenarios by "filling gaps" in traffic, whereas an interface just sends traffic as fast as it can in the order it is received, buffers what it can't fit down the pipe, and drops any excess. In short, it deals poorly with sustained congestion, though even a shaper can only do so much.
 
So, would you recommend shaping a 10 Mbps line to 9, to have a bit of leeway at high-traffic times? Or is that not productive at all?
 
No, I wouldn't. That will just mean you're denying yourself 1 Mbps of data and your circuit will max out that much harder for it. A shape-average will cap traffic at the rate you specify by delaying traffic that exceeds bandwidth and trying to transmit it when bandwidth is available. Reducing your available bandwidth will only hurt you and result in more dropped packets.

Your service provider will be shaping or policing (which is essentially just dropping all traffic above the provided rate) that traffic at 10 Mbps. Your provider may allow for burst excess above 10 Mbps, but you'll need to check the details of your service.

Your best bet may be to call them up and ask them to email you their shaper config so you can duplicate it in your router. If they're policing, we just need to set up a shape-average at that rate. If you're still dropping packets... then we need to start thinking about increasing bandwidth or reducing circuit utilization.
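In other words, whatever rate the provider polices or shapes at is the rate to shape to. For a 10 Mbps handoff, that's simply (policy name again a placeholder):

```
policy-map SHAPE-10M
 class class-default
  shape average 10000000
!
interface FastEthernet0/0
 service-policy output SHAPE-10M
```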
 
Probably the best link for QoS on Cisco is this one:

http://www.cisco.com/en/US/docs/solutions/Enterprise/WAN_and_MAN/QoS_SRND/QoSIntro.html

It is lengthy but a lot of good info in there on how QoS works and how to structure your policies.

The only thing I'd add to electrofreak's QoS policy is that generally you don't want your realtime/priority queue to be more than 33%. More than this and your other traffic can start to suffer: it will sit in queues too long and start to time out, or be denied service entirely.
 
I'd actually disagree with that. If you use a bandwidth statement instead of a priority statement (as I did in my example), available bandwidth in one queue can be shared with other queues that are congested, and thus you can make a 90% EF queue and your other traffic will not suffer unless your EF traffic is actually taking up 90% or more of your bandwidth and there is no further bandwidth available for that other traffic either (IE all best-effort traffic is already being dropped). And if that's happening, you need either more bandwidth, or less EF traffic.

The reason we don't put EF at 90% all the time (and I see tons of people do it) is because realistically your EF traffic shouldn't be eating up that much of your bandwidth, and if it is, you're usually tagging far too much of your traffic as EF. If it IS taking up that much of your bandwidth because you're moving primarily realtime traffic, you'll want to use a priority statement instead of a bandwidth statement. And if you're using a priority statement, you need to be more careful as to how much bandwidth you allocate to certain queues as it will police traffic that exceeds the bandwidth provided. For example, you don't want to priority percent 10 your AF41 traffic because if you use more than 10% of that traffic you'll start dropping critical data. In addition, if your EF traffic is holding so much priority (due to sheer quantity), AF41 traffic will be routinely delayed while your EF traffic is dropped in the transmit ring ahead of it. This is probably the scenario you were thinking of.

Ultimately, whether you use priority or bandwidth depends upon how flexible you want your queues to be and how important it is that your higher priority queues have low latency at the expense of your other queues.
 
Electro, in the following code:
Code:
class-map match-any QOS_REAL-TIME
 match ip dscp 46 (alternatively match ip dscp ef)
class-map match-any QOS_CRITICAL-DATA
 match ip dscp af41 (alternatively match ip dscp 34)
 match access-group name IMPORTANT-SHIT

Why did you use af41 instead of specifying dscp 34 under CRITICAL DATA, but you specified dscp 46 for REAL TIME? Or does it recognize both as equal?
 
Yep, it was meant to illustrate that you can use either/or.
 
We'll have to disagree then. Priority Queuing does provide a dedicated bandwidth guarantee, but it also guarantees that traffic is put on the wire ahead of your other queues. Latency and Jitter are killers with voice and video; without using a priority queue your queuing mechanism and traffic profile can have a big effect on Jitter. (WFQ, CBWFQ, etc.)

Your second paragraph confuses me a little, you can only use priority on one queue. If you have a PQ for your EF traffic, you would still use the bandwidth statement as normal for your other queues.

I'd say if you are marking so much of your traffic EF that it is 90% of your traffic: 1. you are classifying too much traffic as EF and defeating the purpose of using QoS in the first place; 2. you really need to look at what codecs you are using for voice and video (most devices nowadays use adaptive codecs, so they should step down to much lower bit rates when bandwidth is tight); 3. you really need to look at upgrading your link.


Here is a good Cisco doc that talks about the difference between the bandwidth and priority statements.
http://www.cisco.com/en/US/tech/tk543/tk757/technologies_tech_note09186a0080103eae.shtml
 
Let me pick your guys' brain a bit more. I'm having issues adding a second site to the Metro-E.

I added Site 1 to the Metro-E, while the rest are still point-to-point, and I got that working fine.
Today, I added a Site 2 to the Metro-E, and it's causing an issue with Site 1. Each of the sites have their own router.

Site 2 will come up and be able to ping everywhere, but it will bring Site 1 down. Then vice versa.

There aren't any duplicate IP addresses. I can't seem to figure out why it's doing that.

Any suggestions, or guidance is greatly appreciated.
Thanks!
 
We would need to see the routing tables from both routers, and probably other config info to assist.

Also, if the two Metro-E sites share the same PE router, that could cause an issue. I've seen that before.
 
Looks like there was no route entry on Site 1's router to Site 2's, even though they were able to ping each other.

I statically added it, and I'll give it a shot tomorrow morning.
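For anyone following along, a static route like that is a one-liner (these addresses are entirely made up; substitute Site 2's subnet and next hop):

```
ip route 192.168.2.0 255.255.255.0 10.10.10.2
```

show ip route should then list it as an "S" entry. If the sites are supposed to learn each other's routes dynamically, though, the missing entry points at a routing-protocol adjacency problem rather than a connectivity one.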
 
I'm not sure why you disagree with me then, because what you wrote is almost exactly what I said in my post. :confused: Of course priority provides a bandwidth guarantee, but it's also a maximum bandwidth guarantee when the circuit is congested, because it has a built-in policer. I know the difference between the two, and that's what I was illustrating. Perhaps you misunderstood what I meant?

It's not the size of the EF queue that matters; it's the amount of EF traffic you're putting into it and whether or not your other queues are going to have sufficient guaranteed bandwidth. I've seen 90/9 policies work for years without issues. Often they're simply giving EF far more bandwidth than it needs and just ensuring it has priority heading off the interface, but if their silver queue (we use AF41 in general) doesn't carry a lot of traffic either, it's not a big deal, and it allows for EF traffic growth over time as more phones get put in, etc.

I usually recommend something more conservative myself, but it all depends upon the application, and I tend to steer away from "rules of thumb" in QoS because there's so much variation in implementation.
 