Sarala Akella (CCIE 29921) is a customer support engineer with the Cisco Technical Assistance Center WAN Team, where she focuses on WAN-related issues along with QoS issues on various interface types. Akella has been with Cisco for 11 years and has worked as a software engineer in the Network Software and Systems Technology Group. She holds a master's degree in computer engineering from Santa Clara University and a master's degree in mathematics from Osmania University, India.
The following experts helped Sarala answer some of the questions asked during the session:
Satya Narasimhamurthy, Rajat Chauhan, Raymond Stanislaus, and Saurabh Chatterjee.
You can download the slides of the presentation in PDF format here. The complete video recording of this live Webcast can be accessed here.
HQF Related Questions
Q. What is HQF?
A. Hierarchical Queuing Framework is a general and scalable infrastructure for supporting a set of QoS features such as shaping, low latency queuing, guaranteed bandwidth, flow-based fair queuing, and WRED.
Q. In which release is HQF supported?
A. HQF is supported in Cisco IOS Release 12.4(20)T and later. It is recommended to use 12.4(24)T or later.
Q. In slide 33, a) the queuing components diagram is misleading. It shows that bypass happens after policing, priority queuing, WRED, and so on. Is this not incorrect, since with no congestion there is no policing, dropping, etc.? b) What does "bypass possible" mean?
A. a) In HQF, policing, WRED, and the other parameters are applied first, and then the packets are enqueued. Once we know that a packet can be queued and sent through the physical interface, that is when we enqueue it into the different classes.
b) Back pressure happens because of oversubscription of the physical interface. If there is no back pressure on the interface, packets can bypass the whole QoS policy. Basically, QoS kicks in when there is congestion. If there is no congestion and bandwidth is available in the different classes, the traffic can simply be sent; queuing is needed only when there is contention. Therefore, "bypass possible if no congestion" means that when there is no congestion there is no need for packets to be queued before going out the physical interface; any packet that comes in can bypass the queues and go straight out.
Tx-ring Related Questions
Q. What is tx-ring-limit?
A. Tx-ring limit is a command to control the number of packets in the hardware queue of the interface.
Q. Are there some recommendations for tuning the tx-ring buffer?
A. The tx-ring buffer can be tuned if there is voice in the network and you want to limit how long voice packets are delayed. Packets go into the software queue and then fall into the hardware queue; the tx-ring-limit controls the queue depth of the physical interface.
If you have done everything else to address the problem but are still having voice quality issues, you can reduce the number of packets sitting in the hardware queue by lowering the tx-ring-limit to about 3 and then see how the network behaves. The recommended value is around 3-4.
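As a sketch (the interface name is illustrative):
Router(config)# interface Serial0/0
Router(config-if)# tx-ring-limit 3
This limits the hardware ring of Serial0/0 to 3 packets, so a delay-sensitive packet leaving the software queues waits behind at most a few packets in hardware.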
Command Related Questions
Q. How can I prioritize traffic based on the URL?
A. To prioritize traffic based on URL, use the match protocol command, a Network-Based Application Recognition (NBAR) command available in class map configuration mode, and then call the class map in a policy map and assign it a priority value.
Router(config)# class-map TEST
Router(config-cmap)# match protocol http url "......."
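As a sketch of the second step (the policy name, priority value, and interface are illustrative):
Router(config)# policy-map URL-PRIORITY
Router(config-pmap)# class TEST
Router(config-pmap-c)# priority 128
Router(config-pmap-c)# exit
Router(config-pmap)# exit
Router(config)# interface Serial0/0
Router(config-if)# service-policy output URL-PRIORITY
Here priority 128 gives the matched HTTP traffic a 128 kbps low latency queue, and the policy is applied outbound on the interface.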
Q. What does the "aggregate" keyword in the command random-detect dscp-based aggregate do?
A. With WRED, if you want to drop packets of a given DSCP at certain rates, WRED takes the DSCP bits into account and drops those packets once more of them are queued than the thresholds and maximum probability you have configured allow.
For example: random-detect dscp 8 24 40
In the above example, WRED drops the packets based on DSCP 8 with a minimum threshold of 24 and a maximum threshold of 40.
Q. What is the difference between match-any and match-all in the class-map commands?
A. The match-all command matches traffic only if all the match criteria are met; the match-any command matches if any one of the listed criteria is met.
For example, suppose a class has both an access-group match and a match on DSCP ef. With match-all, traffic must satisfy both: it matches only traffic that comes from access-list 1.1.1 and is also marked ef. With match-any, traffic matches if it satisfies either criterion: traffic from source access list 1.1.1, or ef traffic from any other source (i.e., if the traffic is ef, or if it matches a protocol such as Real-time Transport Protocol). You can add as many criteria as needed; traffic matching any of them is placed in that class.
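As a sketch (the class names and ACL number are illustrative):
Router(config)# class-map match-all BOTH
Router(config-cmap)# match access-group 101
Router(config-cmap)# match dscp ef
Router(config)# class-map match-any EITHER
Router(config-cmap)# match access-group 101
Router(config-cmap)# match dscp ef
BOTH matches only packets that are permitted by access list 101 and marked ef; EITHER matches packets that satisfy either criterion.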
Q. Why should I use priority command for voice?
A. Voice packets are very small and are delay sensitive. So if they get queued together with data packets, the voice packets must wait until the data packets go out, which adds queuing delay. Using the priority command, voice packets are placed in a separate queue, and packets with the highest priority are serviced first. For example, if there are four queues, the priority queue's packets are placed at the front.
Q. What do you mean by minimum bandwidth guarantee? What is the command used?
A. A minimum bandwidth guarantee means that if a class is guaranteed, for example, 10 Mbps, it should be able to send at least 10 Mbps of traffic at the time of congestion in the network. It is configured with the bandwidth command, either in kbps or as a percentage of the whole interface (i.e., bandwidth <kbps> or bandwidth percent <value>).
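As a sketch (the policy and class names are illustrative):
Router(config)# policy-map WAN-OUT
Router(config-pmap)# class DATA
Router(config-pmap-c)# bandwidth 10000
Here bandwidth 10000 guarantees 10 Mbps (10,000 kbps) to the DATA class during congestion; bandwidth percent 20 would instead guarantee 20 percent of the interface rate.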
Q. What do you mean by maximum bandwidth? What is the command used?
A. Maximum bandwidth is the rate up to which a class can burst. The command is shape average followed by the value in bits per second.
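As a sketch capping a class at 10 Mbps (the names are illustrative):
Router(config)# policy-map WAN-OUT
Router(config-pmap)# class DATA
Router(config-pmap-c)# shape average 10000000
Note that shape average takes a value in bits per second, whereas the bandwidth command takes kbps.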
Q. How can I adjust the min-threshold value for the buffer?
A. The min-threshold cannot be adjusted on the service policy. This value can be seen using the show policy-map command.
Queue-limit Related Questions
Q. Are there best practice values for configuring queue-limits?
A. Yes. First check whether there is voice in the network. If there is voice, either do not configure a queue limit or give it a small value, so that voice packets are not queued for a long time. If there is no voice, you can configure a larger queue; for example, for TCP-type transfers you can stretch the queue limit to a larger extent.
Q. How can queue-limit be specified in Kbytes instead of packets?
A. When you configure the queue limit, there is a CLI option to specify the value in bytes or in packets.
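As a sketch (the class name and value are illustrative):
Router(config)# policy-map WAN-OUT
Router(config-pmap)# class DATA
Router(config-pmap-c)# queue-limit 65536 bytes
Without the bytes keyword, the same command interprets the value as a number of packets.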
Q. What is the recommended formula for calculating queue depth to meet a specific delay guarantee?
A. The queue depth for each class is derived from the aggregate limit in bytes, depending on the rate of the different classes. Ensure that the aggregate queue-limit of all classes within that layer is less than the hold-queue allocated for that layer.
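As a rough sketch of the arithmetic (values are illustrative): to bound queuing delay at about 100 ms for a class served at 1.5 Mbps,
queue-depth ≈ 0.100 s x 1,500,000 bps = 150,000 bits ≈ 18,750 bytes
so a byte-based queue-limit of roughly 18-19 KB would meet that delay target under these assumptions.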
Shaping Related Questions
Q. How do shaping and policing differ?
A. Shaping buffers the excess traffic, slowing it down and throttling it back, whereas policing drops packets immediately once they exceed a certain bandwidth.
Q. Is it possible to configure a queue-depth on a child class using the bandwidth and not actually shaping?
A. Yes. In the child class, you can configure the queue limit with bandwidth, without configuring shaping.
Q. What is the difference between shaping and policing?
A. In shaping, we throttle the traffic back: the extra traffic is buffered and sent later, so traffic is slowed down and then sent up to the maximum available bandwidth, with the possibility of bursting above the configured rate. In short, it is a throttling-back mechanism. In policing, we control the traffic by dropping it: traffic beyond a certain bandwidth is limited and discarded.
Shaping is buffering the traffic; policing is limiting the traffic.
Q. What is the relation between token adding intervals and shaping rate? Can it be configured? If the goal is (amount of tokens added) + (burst size) < (committed burst of next router)
A. Shaping increments the token bucket at timed intervals using a bits per second (bps) value. A shaper uses the following formula:
Tc = Bc/CIR (in seconds)
In this equation, Bc represents the committed burst, and CIR stands for committed information rate.
The value of Tc defines the time interval during which you send the Bc bits in order to maintain the average rate of the CIR.
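As a quick sketch of the arithmetic (values are illustrative): with CIR = 64,000 bps and Bc = 8,000 bits,
Tc = Bc/CIR = 8,000/64,000 = 0.125 s
so the shaper sends 8,000 bits every 125 ms, which averages out to 64 kbps.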
Q. How do you mark packets to route traffic to go out to specific interface when using MPLS?
A. Under policy-map, choose the specific class and set the MPLS exp bits.
Example:
Router(config)# policy-map xxx
Router(config-pmap)# class yyy
Router(config-pmap-c)# set mpls experimental <value>
Q. What is the shaper CIR granularity for parent level shaper on C7609 ES+ line card?
Q. What does "to switching code" mean in the Ingress packet path slide?
A. Switching code means the software logical queue.
Q. Is there any QoS functionality difference between the Policy Feature Card 1 (PFC1) and PFC2?
A. PFC2 lets you push down the QoS policy to a Distributed Forwarding Card (DFC). PFC2 also adds support for an excess rate, which indicates a second policing level at which policy actions can be taken. Refer to the
Q. How many Virtual Circuits (VCs) can a service policy support simultaneously?
A. 256, or anything between 200-300. (The presenter said she needs to check the correct answer.)
Q. Can I configure QoS in an interface that has a secondary IP?
A. Yes. QoS can be configured on an interface that has a secondary IP by configuring the service policy with the secondary IP in it.
Q. Why is it good idea to use FIFO in the default class?
A. All traffic is classified into the different user-defined classes configured in the QoS policy; whatever packets remain after that are unclassified traffic, which falls into the default class. To avoid spending processing overhead on the bandwidth available to the default class, all of that traffic is sent in FIFO mode.
Q. Are there any restrictions that I need to know when I configure QoS over a port channel?
A. If the port channel is Layer 2, then we need to configure Layer 2 QoS with mls qos enabled. If the port channel is configured as Layer 3, then MQC should be configured.
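As a rough, platform-dependent sketch (the names are illustrative): for the Layer 2 case, QoS is enabled globally; for the Layer 3 case, an MQC service policy is attached to the port channel:
Router(config)# mls qos
Router(config)# interface Port-channel1
Router(config-if)# service-policy output MQC-POLICY
Exact support varies by platform and line card, so check the platform's QoS configuration guide.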
Q. Why do we need parent child policy for sub-interfaces?
A. On a physical interface you know the available bandwidth: for example, a serial interface has 1.5 Mbps available; Fast Ethernet has 100 Mbps. However, you do not know the available bandwidth of a sub-interface, so you must shape the pipe. For example, if the provider gives 15 Mbps on an MPLS link, the traffic on that link needs to be shaped. To do that, you create another QoS policy (the parent, or shaper) and, in its class default, apply the actual service policy and give the bandwidth (i.e., shape the outer tunnel, which gives the outer pipe).
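As a sketch of such a parent/child (hierarchical) policy, using the 15 Mbps example (the names and values are illustrative):
Router(config)# policy-map CHILD
Router(config-pmap)# class VOICE
Router(config-pmap-c)# priority 1000
Router(config)# policy-map PARENT
Router(config-pmap)# class class-default
Router(config-pmap-c)# shape average 15000000
Router(config-pmap-c)# service-policy CHILD
Router(config)# interface GigabitEthernet0/0.100
Router(config-subif)# service-policy output PARENT
The parent shapes the sub-interface to 15 Mbps; the child then queues within that shaped pipe.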
Q. How does Link Fragmentation and Interleaving (LFI) work?
A. Link Fragmentation and Interleaving (LFI) is designed mostly for PPP links where you have larger packets and small packets; the larger packets are cut into fragments (for example, 512 bytes) and sent across the link. With fragmentation, you can tell at what point you want to fragment a particular packet. Interleaving lets the smaller voice packets move in front of the bigger packets' fragments so that the voice packets can go out earlier (i.e., faster). Basically, the big packets are cut at 512 bytes, and the smaller packets are sent to the front of the queue.
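As a sketch on a PPP multilink bundle (the interface and delay value are illustrative):
Router(config)# interface Multilink1
Router(config-if)# ppp multilink
Router(config-if)# ppp multilink fragment delay 10
Router(config-if)# ppp multilink interleave
Here fragments are sized so that each takes at most about 10 ms to serialize, and interleaving lets small voice packets slip between the fragments of larger packets.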
Q. What is the main difference between WRED and WFQ?
A. WRED (Weighted Random Early Detection) controls the drop policies in the network and is a congestion avoidance mechanism. WFQ (Weighted Fair Queuing) puts all the traffic into fair queuing, and interactive traffic jumps to the front of the queue. Therefore, WFQ is a queuing mechanism while WRED is a dropping mechanism.