Policing & CBWRED in QOS

ryel.dsouza
Level 1

Are policing and CBWRED similar in QoS?

As I understand it, CBWRED is used for congestion avoidance: you can specify when packets get dropped randomly and when they are tail dropped, so as to avoid congestion.

Policing can also be used to avoid congestion by specifying when to drop packets; at the same time you can also change the priority of the packets, such as the DSCP, when traffic exceeds the maximum rate you have specified in the policer.

Kindly correct me if I have got it wrong.

12 Replies

Joseph W. Doherty
Hall of Fame

"Is Policing & CBWRED similar in QOS ????"

Similar? Well, policing measures transmission rate; it can drop or remark packets that exceed the specified rate. Rate measurement has nothing to do with congestion.

WRED drops packets when there's queue congestion. With no queue congestion, WRED isn't active.

So I'm unsure how similar the two are, beyond the fact that both can drop packets and both can work with ToS markings.
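
To make the contrast concrete, here is a minimal MQC sketch (the class name, rates, and thresholds are made up for illustration): the policer acts on the measured rate whether or not anything is queued, while random-detect only comes into play once packets actually build up in the class queue.

class-map match-any BULK
 match dscp af11
!
policy-map WAN-EDGE
 class BULK
  ! Policer: rate-based, drops anything over 512 kbps even on an idle queue
  police cir 512000 conform-action transmit exceed-action drop
  ! CBWRED: needs a queue, so the class gets a bandwidth guarantee first
  bandwidth 512
  random-detect dscp-based
!
interface Serial0/0
 service-policy output WAN-EDGE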

marikakis
Level 7

Hello,

In some sense I think you are right. Both methods seem to have a congestion-avoidance flavor. But then again, one could claim that all QoS tools have the ultimate goal of avoiding congestion.

WRED pro-actively tries to manage a respective queue, taking into consideration the average queue length (number of entries/packets in queue). When WRED reaches a point where it discards all packets, this is not called "tail drop". Some call it "full drop", which is probably more correct, because "tail drop" usually refers to a full queue dropping new packets (while the queue is not really full when WRED goes to the "full drop" phase).

Policing is usually put in place to make sure a rate contract is respected at network boundaries. Policing is not using "tail drop", simply because it is not monitoring any kind of queue. The metering function measures rate and bits/bytes, not queue length. (The queue as a data structure does not contain the packets themselves; rather it contains pointers to packets. You can consider the "queue length" similar to "number of packets".)

So, I think WRED and policing are used in different situations and in different positions in the network (policing is usually an edge function), they measure different things to do their job (queue length vs. rate), and they are used to affect different types of traffic (RED is mostly used to influence TCP flows).

When policing marks packets with a DSCP which implies the packets are discard-eligible, it is more a matter of marking non-conforming traffic. If congestion develops, then tools like WRED can take the DSCP markings (from policing or other classification/marking procedures) into consideration and more actively try to avoid the worst by dropping packets.
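
As a sketch of that interplay (class and policy names and all numbers are hypothetical, and the class-maps are omitted): an edge policer re-marks out-of-contract traffic to a higher drop precedence, and a dscp-based WRED queue deeper in the network then starts dropping that traffic earlier when congestion develops.

! Edge: out-of-contract traffic is re-marked to AF13, not dropped
policy-map INGRESS-CONTRACT
 class CUSTOMER-A
  police cir 1000000 conform-action set-dscp-transmit af11 exceed-action set-dscp-transmit af13
!
! Core: WRED begins discarding AF13 at a lower average queue depth than AF11
policy-map CORE-OUT
 class AF1-TRAFFIC
  bandwidth 2000
  random-detect dscp-based
  random-detect dscp af11 32 40 10
  random-detect dscp af13 20 40 10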

Kind Regards,

M.

p.s. I was probably so focused on thinking about this that I failed to see the previous response, posted in the meantime. Anyway, it seems to me that we agree on the key points.

So, in short: CBWRED is used for congestion avoidance of packets in queues, after classification and marking, whereas policing is used to monitor the transmission rate in bits/sec on an interface/policy map, and if traffic exceeds what is set we can either drop the packets or change the DSCP, and so on?

Yes, I believe this is pretty much it.

I would suggest WRED is more about congestion management than congestion avoidance, since it's activated by congestion.

From http://www.cisco.com/en/US/docs/ios/12_2t/12_2t8/feature/guide/ftwrdecn.html:

"How WRED Works

WRED makes early detection of congestion possible and provides a means for handling multiple classes of traffic. WRED can selectively discard lower priority traffic when the router begins to experience congestion and provide differentiated performance characteristics for different classes of service. It also protects against global synchronization. Global synchronization occurs as waves of congestion crest, only to be followed by periods of time during which the transmission link is not used to capacity. For these reasons, WRED is useful on any output interface or router where congestion is expected to occur."

Oh, and a policer not only can mark or drop packets beyond a specified rate, it can also "color" packets. More information on using a policer for that can be read here: http://www.cisco.com/en/US/docs/ios/12_0s/feature/guide/12s_cap.html
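
For the coloring point, a two-rate policer sketch (both rates are arbitrary): traffic under the CIR stays green, traffic between CIR and PIR is marked yellow with a higher drop precedence, and traffic over the PIR is dropped.

policy-map TWO-RATE-COLOR
 class class-default
  ! Green: under 1 Mbps; yellow: re-marked to AF33 up to 2 Mbps; red: dropped
  police cir 1000000 pir 2000000 conform-action set-dscp-transmit af31 exceed-action set-dscp-transmit af33 violate-action drop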

Hello,

The inventors of the original RED algorithm seem to think that RED is actually congestion avoidance (Floyd, S., and Jacobson, V., "Random Early Detection Gateways for Congestion Avoidance", http://www.icir.org/floyd/papers/early.pdf). Most documentation/books from Cisco (e.g. the QoS exam certification guide) classify RED/WRED under congestion avoidance algorithms (or active queue management). In fact, RED, WRED, and ECN are the usual methods used to demonstrate the concept of congestion avoidance.

It seems to me that the logic has to do with what somebody means by the word "congestion". Usually, what is implied is that if the queue is not full, we do not have congestion; if the queue is full, we do. RED/WRED try to avoid the queue-full situation, so it's congestion avoidance. The text you posted says at some point that "the router begins to experience congestion". What does this actually mean? Do we have congestion now, or do we have clues that point to the potential of congestion in the near future?

Anyway, this could very well be an issue of the form "Is MPLS L2, L3, or L2 and a half?", "Can RIP be an L3 protocol since it uses a UDP port to operate?", "Wireless nodes use collision avoidance (CSMA/CA), while wired Ethernet uses collision detection (CSMA/CD); but aren't wired nodes also trying to avoid collisions by listening to the medium before they send?". I try to care about how something works and how I can use it. Categorizations help to classify solutions/objects in human heads, but most human heads are not exactly alike, and objects that are often put in the same category by humans are not entirely the same. Most interestingly, objects that are often put in different categories could actually turn out to have a lot in common :-)

Kind Regards,

M.

You're 100% correct that "congestion avoidance" is used within the '93 paper you reference. However, if you do some more research, I believe you will find "congestion management" used in reference to RED too.

"Congestion avoidance" appears to imply providing congestion feedback information to the transmitting flow so it, the flow, can avoid making more of it and attempt to decrease it. You'll find "congestion avoidance" common in this usage, perhaps starting(?) with Congestion Avoidance and Control V. Jacobson, ACM SIGCOMM-88, August 1988. In the same paper you'll find the term "congestion control" for the gateway side, although that's left for future work.

If you read Dr. Floyd's later 2000 paper, http://www.icir.org/floyd/papers/TCPreport.pdf, she uses the term "congestion control" rather than "congestion avoidance". Although she isn't focused on RED, she does touch upon AQM. What's interesting is that this paper discusses what TCP is doing, not gateways. I.e., I would have expected to see "congestion avoidance", but this later paper might represent an evolution in thinking on the subject.

I agree you'll often find RED mentioned in reference to AQM (active queue management). With AQM the focus shifts from the flow's perspective to what might be done to actively manage queues on a gateway device. We can certainly manage queues to keep them at certain sizes, but one goal is to manage them to provide congestion feedback to the flow, so we're also performing congestion management, RED being one of the techniques.

In other words, RED might be considered "congestion management" from the view of the network device and "congestion avoidance" from the view of the flow. This distinction might be of benefit when you discuss setting RED parameters. (BTW, Dr. Floyd has a nice web page with links to more papers on RED: http://www.icir.org/floyd/red.html.)

You write, "It seems to me that the logic has to do with what somebody means by the word 'congestion'. Usually, what is implied is that if the queue is not full, we do not have congestion." I believe that if there are any packets ahead in an interface queue, even one, there's congestion. Of course, then the question isn't whether there is or isn't congestion but how bad it is.

The paper reference you provided describes in its introduction: "Therefore, with increasingly high-speed networks, it is increasingly important to have mechanisms that keep throughput high but average queue sizes low." I.e., there's more to RED's purpose than just avoiding a queue filling.

I also agree with your summary of how humans categorize. Terminology, and language generally, often have shades of meaning, so although we use terminology to try to be precise, we often fail. In this case, that's why I suggested RED, on a network device, be considered "congestion management", since we, on the network device, are trying to manage congestion. TCP "congestion avoidance" then refers to its rules in response to drops or ECN. Since TCP's "congestion avoidance" will trigger for any drop, including FIFO tail drops, I suggest some way to treat the two differently in terminology. Using the same term for both, I believe, is confusing unless you also provide context.

Hello,

It seems you have spent time and effort on this and I respect that (the rating of your post implies it). I only have one objection, and that would be to your definition of congestion. Is even a single packet in an interface queue congestion? I think when most people (correct me if I am wrong) hear the word congestion (in any context, whether in the network or on the streets), they visualize some kind of (even slight) service deterioration (and that's all we care about, right? Who cares if there is a single packet in some queue if their VoIP call sounds perfect?). A fundamental mission of interface queues is to accommodate the fact that occasionally many inputs might send data to the same output at the same time (the term I am aware of for this situation is "output contention"). The illusion of full-duplex communication between ports connected to a single switch needs switching fabric and buffers to make it seem true. With your definition, it seems to me that every network has been congested from the moment it was conceived (which might well be true in some sense, but then again it also seems like a somewhat extreme argument).

Kind Regards,

M.

Yes, one packet in an interface queue is congestion - it delays the next packet. The real question is whether interface congestion is an issue or not.

"who cares if there is a single packet in some queue if their VoIP call sounds perfect?)." Well if the VoIP call does sound perfect, perhaps care is needless, but even one packet can impact VoIP. I draw your attention to the purpose for LFI on slower bandwidth links, i.e. how one large packet ahead of a VoIP packet can be a real issue.

BTW: In the real world, I don't know about you, but my luck seems to be that the one person ahead of me is having some type of problem; either they're counting out $100 in pennies or their car just stalled, etc. ;)

"A fundamental mission of interface queues is to accomodate the fact that occasionally many inputs might send data to the same output at the same time . . ." Actually the queue is there because the ingress rate can exceed the egress rate. Multiple inputs aren't the only reason, speed mismatch will cause it too.

"The illusion of full-duplex communication between ports connected in a single switch needs switching fabric and buffers to make seem like it is true. With your definition, it seems to me that every network is congested from the moment it was conceived (which might well be true in some sense, but then again it also seems like a somewhat extreme argument). " Perhaps, or perhaps not. You're making assumptions where the traffic flows. It's certainly possible we might know (or control) what hosts will communicate with which hosts. We can also have asymetric bandwidth designs, actually common for uplink and/or server ports compared to user host ports.

Interface congestion is packets waiting in a queue; however, the significance of this depends on many other variables. Allowing a TCP bulk data transfer to fully utilize the bandwidth on an LFN might require a huge queue on a network device. You might consider this interface congestion, but it's ideal for the application. Conversely, as mentioned above, just one packet ahead of a VoIP packet could be a problem, yet you might not consider this congestion. So, I argue there's a difference between interface congestion and whether it's significant.

I am aware of the fragmentation solution for slow-speed links, but I thought this was more of a serialization delay issue than a queue congestion issue. I think we agree on the main points; we just phrase some sentences differently. For example, you talk about "speed mismatch", while I think that the model of a switch with many equal-bandwidth ports is generic enough to model the speed mismatch situation as well (e.g. two ports sending at full rate towards a third port).

It seems Cisco has a definition for congestion in the following document: http://www.cisco.com/en/US/tech/tk543/tk760/technologies_tech_note09186a0080108e2d.shtml

It is stated there that "Functionally, congestion is defined as filling the transmit ring on the interface." So, I will surrender by saying my last words: several QoS tools, when configured, need the transmit ring reduced in size to be effective, so filling the ring becomes rather easy, and with this definition it seems that the QoS tools themselves cause "congestion" from time to time.
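
For anyone following along, the knob in question is tx-ring-limit; on platforms that expose it (e.g. under an ATM PVC), shrinking the ring pushes packets back into the software queues where the QoS tools can act on them. The value here is only illustrative:

interface ATM0/0
 pvc 1/100
  ! A small hardware ring means software queuing, and hence QoS, kicks in sooner
  tx-ring-limit 3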

BTW: I live in a country that has (how can I put it politely?) little respect for FIFO queues. Besides the situations you describe, I keep finding myself failing to be "transmitted" even when I am in the process of being serialized (and I am not even a large packet)! This could be called a VoIP-friendly country ;-)

From your reference, and what you're quoting, I'm not too keen on their "Functionally" definition about filling the tx ring. However, I won't digress further on why.

I do like their "Conceptually" definition, quoting the Cisco IOS software configuration guide: "During periods of transmit congestion at the outgoing interface, packets arrive faster than the interface can send them."

I don't disagree with your description (in your previous or prior posts) of multiple ports sending to the same port, just that it isn't the only reason for an egress port to be congested. In fact, the very next paragraph in your reference, after the prior quotation, describes what I had in mind: "In other words, congestion typically occurs when a fast ingress interface feeds a relatively slow egress interface. A common congestion point is a branch-office router with an Ethernet port facing the LAN and a serial port facing the WAN. Users on the LAN segment generate 10 Mbps of traffic, which is fed into a T1 with 1.5 Mbps of bandwidth." Typically, multiple ports sending to one port is a cause of congestion within a LAN switch, while a speed mismatch is often the situation in a WAN router.

The other point I was trying to make, although probably not well, is that within a LAN, bandwidth is usually much less expensive, so one might be able to avoid some potential bottlenecks by design. LAN-to-WAN is often more difficult to design around congestion bottlenecks.

"I am aware of the fragmentation solution for slow speed links, but I thought this more of a serialization delay issue, rather than a queue congestion issue." You're correct in the sense of queuing at the packet level, but interface serialization is really queuing at the bit level.

Maria, I've enjoyed this discussion; hope you have too. Ryel, hope both Maria's and my posts were interesting to you too, even if we went beyond your original question.

I like the "queuing at the bit level" phrase of yours when you refer to serialization. It's another perspective and a good insight I believe.

Joseph, sure, the discussion with you was an intellectual challenge ;-) The discussion turned out to be a QoS overview!
