
Ask the Expert: QoS on Catalyst Switches.

ciscomoderator
Community Manager

With Shashank Singh and Sweta Mogra

Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn from Cisco experts Shashank Singh and Sweta Mogra about the implementation, operation, and troubleshooting of QoS on Cisco Catalyst 2960, 3650, 3750, 4500 and 6500 switches.

Shashank Singh graduated in 2009 with a bachelor's degree in Computer Science and Engineering from VIT University, Vellore, India. Prior to joining Cisco he worked at General Electric as a software engineer. He joined the Cisco Technical Assistance Center as an engineer in October 2009 and has been working on LAN Switching technologies in TAC since then. Shashank also holds a CCNP certification. QoS on Catalyst switches is one of his areas of interest.

Sweta Mogra is a Computer Science and Engineering graduate from VIT University, India. She worked as a consultant with Tata Consultancy Services before joining Cisco's Technical Assistance Center (TAC) in 2011. She currently works on LAN Switching technologies, with QoS as one of her areas of expertise.

Remember to use the rating system to let Shashank and Sweta know if you have received an adequate response. 

Shashank and Sweta might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation in the Network Infrastructure sub-community, LAN Switching forum, shortly after the event. This event lasts through June 1, 2012. Visit this forum often to view responses to your questions and the questions of other community members.

120 Replies

Jonathan THOMAS
Level 1

Hello,

I have a question about thresholds. I can't figure out what they are and how they work; could you help?

Here is an example, if you want to base your explanation on it:

Distribution1(config)#mls qos srr-queue input cos-map queue 2 threshold 3  3 5

Thank you so much.

Regards,

Jonathan.

Hi Jonathan,

Let me start by explaining thresholds first. A threshold is a percentage of the total buffers available in a queue. On a 3750, each egress queue has 3 thresholds, and threshold 3 is always 100% (non-configurable). This leaves us with threshold 1 and threshold 2, which can be configured to any percentage (say x% and y% respectively).

This means that packets with markings associated with threshold 1 get dropped once the queue is x% full, and packets with markings associated with threshold 2 get dropped once the queue is y% full.

Now coming to your question, the command "mls qos srr-queue input cos-map queue 2 threshold 3 3 5" associates input queue 2 threshold 3 with CoS values 3 and 5. This means that CoS 3 and 5 will NOT get dropped in ingress queue 2 unless it is 100% full (as threshold 3 is always 100%).

'show mls qos input-queue' will show you the current threshold 1 and threshold 2 values for the input queues on the 3750 platform.
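As a hedged illustration (the percentages and the CoS 1 mapping below are made up, using the same 3750 syntax), you could set the two configurable thresholds of input queue 2 and then verify them:

Distribution1(config)#mls qos srr-queue input threshold 2 40 60
Distribution1(config)#mls qos srr-queue input cos-map queue 2 threshold 1 1
Distribution1(config)#end
Distribution1#show mls qos input-queue

With this, CoS 1 traffic in queue 2 would start getting dropped once the queue is 40% full, while CoS 3 and 5 (mapped to threshold 3 by your command) would be dropped only when the queue is completely full.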

Hope this helps.

Regards,

Shashank

Hi,

Yes, this is very helpful, I'm starting to understand it a lot better with your explanation.

I've got some other questions for you, I warn you lol

1) About the internal ring, it's like a very big buffer, right?

What holds the packets before forwarding them to the egress queues? How can I know its size?

2) Am I forced to do ingress policing? If not, in what scenario do we use it?

I'm also having some trouble understanding what the rate-bps and burst-byte values are.

3) In general, what is the size of the egress and ingress queues on a switch like a 2960 or 3750?

That's it for the moment, I think...

Thank you Shashank.

Regards,

Jonathan.

Hi Jonathan,

Please find answers inline.

1) About the internal ring, it's like a very big buffer, right?

What holds the packets before forwarding them to the egress queues? How can I know its size?

When it comes to QoS buffers, they are actually present on the port ASIC. The buffer size varies from one switch to another and is mostly Cisco-internal information. The ring, on the other hand, is the internal data path a packet takes while travelling between ports or between switches stacked together; it is not used for QoS buffering.


2) Am I forced to do ingress policing? If not, in what scenario do we use it?

I'm also having some trouble understanding what the rate-bps and burst-byte values are.

Policing is not compulsory. We use it when we want to rate-limit incoming traffic to a certain value. One typical scenario is rate-limiting per-user traffic going to the Internet, to ensure that the total traffic going out to your ISP does not exceed the available bandwidth.

Rate-bps is the average traffic rate, in bits per second, that we want the ingress flow to be policed to. Burst-byte is the maximum burst, in bytes, that is tolerated before policing kicks in. In other words, the upper limit to which you want your traffic restricted is the rate-bps, and the maximum size of a burst that can be accommodated is the burst.
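To make this concrete, here is a minimal, hedged sketch of an ingress policer on a 3750; the ACL, class and policy names, the 5 Mb/s rate and the 8000-byte burst are all made up for illustration, and 'mls qos' is assumed to be enabled globally:

Distribution1(config)#access-list 100 permit ip 192.168.1.0 0.0.0.255 any
Distribution1(config)#class-map match-all USER1-TRAFFIC
Distribution1(config-cmap)#match access-group 100
Distribution1(config-cmap)#policy-map LIMIT-USER1
Distribution1(config-pmap)#class USER1-TRAFFIC
Distribution1(config-pmap-c)#police 5000000 8000 exceed-action drop
Distribution1(config-pmap-c)#exit
Distribution1(config-pmap)#exit
Distribution1(config)#interface fastethernet0/1
Distribution1(config-if)#service-policy input LIMIT-USER1

Here 5000000 is the rate-bps and 8000 is the burst-byte: traffic from the 192.168.1.0/24 user arriving on fa0/1 that exceeds this contract is dropped.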


3) In general, what is the size of the egress and ingress queues on a switch like a 2960 or 3750?

Most common port ASICs have buffers on the order of about 2 MB per port group, but as pointed out, this number varies from one 3750/2960 model to another.

Regards,

Shashank


I warn you, you are going to think that I'm a pain in the ass lol

In a more general way, once I have done all my QoS configuration, how do I apply it?

These are just examples to help me understand the whole picture, I'm not asking you to do the whole configuration lol

First case, I have one switch (just for the concept)

        --------------

        |  Switch  |

        --------------

          |       |

    fa 0/1     fa 0/3

          |       |

   Client1   Client2

I want to give priority to the FTP traffic in both directions between Client1 and Client2 in case of congestion.

On fa0/1 :

I apply my policy map with all the hierarchy it can contain (class map, ACL),

and also the configuration of the ingress queues to classify, mark and queue the traffic.

On fa0/3 :

I assume that I apply the same configuration that I put on fa0/1, right? Since I want to prioritize the same traffic.

But what should I do for the egress part? Put the same egress configuration on both interfaces?

Second case, I have two switches with a trunk between them

switch 1 gi0/1 ----------------- gi0/2 Switch 2

      |                                             |

  fa 0/1                                       fa 0/3

     |                                              |

Client1                                     Client2

Same scenario as the first one for the priority part.

On fa 0/1 and fa 0/3 : I see what I have to put for the ingress part, but not for the egress part on these interfaces?

On gi0/1 and gi0/2 : I see what I have to put for the egress part, but not for the ingress part?

I have a good picture of how the packet is processed and where it is going to be sent, but I don't see how the destination processes the incoming packet, you see my problem? lol

That's a damn long post, but I'm so into the topic, it's hard not to ask questions about it, you know

Hi Jonathan,

Please find answers inline:

On fa0/3 :

I assume that I apply the same configuration that I put on fa0/1, right? Since I want to prioritize the same traffic.

The configuration on egress will not be the same as on ingress, because we would be using the output queues on egress. Separate maps govern which marking goes to which queue in ingress and egress (dscp-input-q and dscp-output-q), so the commands that map markings to queues differ between ingress and egress. All four egress queues (on a 3750 switch) participate in SRR unless the priority queue is enabled. Once we enable the priority queue, it gets highest priority and the remaining three queues continue using SRR. You can make sure that your FTP traffic gets mapped to the priority queue and is treated with priority. This is supported only on egress, using the 'priority-queue out' interface-level command.
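As a hedged sketch of what this could look like on a 3750 (assuming, purely for illustration, that your FTP traffic is marked with CoS 3; on this platform the egress priority queue is queue 1):

Distribution1(config)#mls qos srr-queue output cos-map queue 1 threshold 3 3
Distribution1(config)#interface fastethernet0/3
Distribution1(config-if)#priority-queue out

The first command maps CoS 3 to egress queue 1, and 'priority-queue out' turns queue 1 into the expedite queue on that interface, so it is serviced before the other three queues.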

Second case, I have two switches with a trunk between them

switch 1 gi0/1 ----------------- gi0/2 Switch 2

    |                                             |

fa 0/1                                       fa 0/3

   |                                              |

Client1                                     Client2

Same scenario as the first one for the priority part.

On fa 0/1 and fa 0/3 : I see what I have to put for the ingress part, but not for the egress part on these interfaces? On gi0/1 and gi0/2 : I see what I have to put for the egress part, but not for the ingress part?

If your traffic is coming pre-marked from the source, you can simply trust the marking (mls qos trust cos/dscp) on all four ports. If, however, you are planning to mark traffic on the switch, you will have to apply the service policy inbound on fa0/1 and fa0/3 and apply the trust command on gi0/1 and gi0/2.

For prioritizing traffic, make sure that the traffic gets priority on both switches. This can be done by enabling 'priority-queue out' on all four interfaces as discussed above and ensuring that the marking for the FTP traffic is mapped to go out of the priority queue.
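A minimal sketch for switch 1, assuming the traffic is marked on the access ports by a hypothetical inbound policy called MARK-FTP (the "Switch1" hostname and the policy name are assumptions; switch 2 would mirror this on fa0/3 and gi0/2):

Switch1(config)#interface fastethernet0/1
Switch1(config-if)#service-policy input MARK-FTP
Switch1(config-if)#priority-queue out
Switch1(config-if)#exit
Switch1(config)#interface gigabitethernet0/1
Switch1(config-if)#mls qos trust dscp
Switch1(config-if)#priority-queue out

The access port marks and prioritizes, while the trunk port simply trusts the DSCP arriving from the other switch and uses the same egress priority queue.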

Regards,

Sweta

Hi,

I'm starting to understand the whole thing now.

Just one last quick question: which is best to use, the DSCP or the CoS label?

Or does it not matter, because of the CoS-to-DSCP map?

And can we have traffic shaping and traffic sharing at the same time?

About the rating thing, is it all right if I rate all the answers?

They were very useful and I think they can answer someone else's questions too.

I will continue to read stuff about this topic, it's very interesting.

A big thank you to both of you Shashank and Sweta.

Regards,

Jonathan.

Hi Jonathan,

I am glad that you found this discussion useful. To answer your first question, it actually does not matter, because the cos-dscp map is used by the switch to find the equivalent DSCP and then apply QoS.
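If you want to see or adjust how the switch translates CoS to DSCP, the cos-dscp map can be displayed and modified; a hedged example below uses the usual default values for CoS 0 through 7 except that CoS 5 is mapped to DSCP 46 (EF), a common adjustment for voice:

Distribution1#show mls qos maps cos-dscp
Distribution1(config)#mls qos map cos-dscp 0 8 16 24 32 46 48 56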

Answering your second question, yes, we can configure shaping and sharing at the same time on an interface, but each queue will work either in shared mode or in shaped mode, not both. In fact, the queues which are shaped do not participate in sharing.

In shaped mode, the egress queues are guaranteed a percentage of the bandwidth, and they are rate-limited to that amount. Shaped traffic does not use more than the allocated bandwidth even if the link is idle. Shaping provides a more even flow of traffic over time and reduces the peaks and valleys of bursty traffic. With shaping, the absolute value of each weight is used to compute the bandwidth available for the queues.

srr-queue bandwidth shape weight1 weight2 weight3 weight4

The inverse ratio (1/weight) controls the shaping bandwidth for this queue. In other words, queue 1 is reserved 1/weight1 of the total bandwidth, and so on. If you configure a weight of 0, the corresponding queue operates in shared mode: the weight specified with the srr-queue bandwidth shape command is ignored, and the weights specified with the srr-queue bandwidth share interface configuration command for that queue come into effect.

In shared mode, the queues share the bandwidth among them according to the configured weights. The bandwidth is guaranteed at this level but not limited to it. For example, if a queue is empty and no longer requires a share of the link, the remaining queues can expand into the unused bandwidth and share it among them.

srr-queue bandwidth share weight1 weight2 weight3 weight4

Queue 1 is guaranteed a minimum of weight1/(weight1 + weight2 + weight3 + weight4) of the bandwidth, but it can also eat into the bandwidth of the other non-shaped queues if required.
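A hedged numeric illustration with made-up weights: on a 100 Mb/s port configured with

srr-queue bandwidth shape 25 0 0 0
srr-queue bandwidth share 10 30 30 30

queue 1 is shaped to 1/25 of the line rate, i.e. 4 Mb/s, and does not share, while queues 2, 3 and 4 split the remaining bandwidth in the ratio 30:30:30, each guaranteed roughly a third of what is left but free to borrow from the others when they are idle.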

Yes, you are free to rate any answer on this discussion that you find useful. If there is a post that you feel has answered your question, please feel free to go ahead and mark it  "answered".

Regards,

Shashank

This is the answer I read above. I would like to know the answer for a specific config.

I want to know the q1, q2, q3 and q4 bandwidth for the following config.

Assume the interface bandwidth is 100M.

Let's say shaped queue weights: 20 0 0 0

             shared queue weights: 30 40 50 60

I think q1 will be 1/20 times 100 = 5M.

Q1 should not participate in sharing, so q1 will have a fixed 5M rate limit,

but q2 will get 40/(40+50+60) times 100, and that is not rate-limited. Please confirm.

What is the use of configuring 30 for q1 in the share command?

What will the queue bandwidths be if the egress priority queue is enabled?


Amit23
Level 4

Hello guys...

I need your help to learn about QoS...

I have tried studying books and videos many times, but I still feel I don't know QoS well...

Can you tell me what is best and which way is good to get a full knowledge of QoS and become more comfortable with it?

thanks

Warm Regards,
Amit Sahrma

Hi Amit,

I know this question was directed to Shashank / Sweta, but I would like to answer it. I understand you have gone through multiple videos and books. I would still suggest you go through the "Kevin Wallace" QoS videos - they are great.

At least I know that you probably understand why we require QoS. Now, the only thing left to understand is the different tools available to overcome the issues.

Congestion Management : FIFO, WFQ, CBWFQ, LLQ (Queuing)

Congestion Avoidance : WTD, RED, WRED

Traffic Shaping & Policing

Kevin Wallace has his own site (www.1examamonth.com). I am not advertising anyone's site here, but these are some of the best videos, which I must highlight.

Thanks

Vivek

Hi Amit,

Vivek has suggested some excellent QoS resources. I would just like to add a few points.

Though the underlying theory remains the same, understanding and configuring QoS on switches requires a certain degree of platform knowledge. This is mainly because switches are designed to perform QoS in hardware (ASICs), unlike most routers, which depend on the IOS for the same.

For example, on Catalyst 6500 switches, QoS is performed by the PFC (Policy Feature Card) on the supervisor engine and  hence it is important to have a prior understanding of what PFC is and how it works.

Another good resource for understanding QoS on switches is the set of platform QoS configuration and troubleshooting documents available on Cisco.com. As each switch platform implements QoS in a different way, there are separate documents available for each platform. These documents provide a comprehensive perspective on QoS configuration and troubleshooting for the respective switch platforms.

Hope this helps.

Regards,

Shashank

lcd_shouldit
Level 1

Hi Shashank / Sweta,

I have a few questions:

1. Is it necessary to implement QoS on a Catalyst when the interfaces are almost all 1G or 10G speed?

    If it is necessary, why?

2. When configuring QoS on a Catalyst, we should map CoS or DSCP to the different queues.

    I have read some Cisco documents; it looks like there should be a standard and recommended map, but the map configs in these documents are not the same. So would you please show one standard and recommended config for

    CoS or DSCP mapping to the different queues?

    Or how do you configure these mappings?

3. When configuring QoS on a Catalyst 3750, there are two parameters, which are bandwidth and buffer.

    I want to know how they work; if there is congestion on a port, which parameter will the 3750 consider first?

Thank you~


changdong liu wrote:

Hi Shashank / Sweta,

I have a few questions:

1. Is it necessary to implement QoS on a Catalyst when the interfaces are almost all 1G or 10G speed?

    If it is necessary, why?

Any interface that can be offered more bandwidth than it can transmit (e.g. >10 gig in to 10 gig out) can congest.  Congested interfaces cause queuing delay and/or frame/packet drops.

If congestion is adverse enough to the application(s), then QoS might be used to favor some traffic at the expense of other traffic.

Typically QoS is used to manage interface congestion via traffic prioritization and/or traffic drop preference, i.e. some traffic can be given reduced queuing latency and/or drops while other traffic experiences increased queuing delay and/or drops. For example, if there's both VoIP and FTP traffic passing across a congested interface, we could use QoS to "move" queuing delay and/or drops so they impact just the FTP traffic.

BTW, with "fast" interfaces (usually FastEthernet and faster), transient congestion is more likely to cause buffer exhaustion (on some switches) rather than a latency delay issue.
