I have a 100 Mb WAN link between two 3750s, both terminating on gig ports but set to 100 Mb full duplex.
I am seeing traffic restricted to around 25% of the link, with discards appearing beyond this level. We are using QoS to match VoIP traffic (CoS 5), and everything else is defaulted (CoS 0). CoS 5 is mapped to the priority queue and given 10% of the WAN bandwidth; everything else (CoS 0) gets 80%.
My understanding is that in shared mode each queue is guaranteed 25% as a minimum bandwidth, and if a queue is empty, the other queues can overspill into its bandwidth. However, how does this work in relation to the buffer settings? Do the buffer settings overrule the shared queue settings? If so, I would expect CoS 0, which is mapped to queue-set 1, queue 2, to get up to 80% of the bandwidth...
Can anyone please explain why this seems to be the case, or where I'm going wrong...
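For reference, a configuration along the lines described above might look like this (the interface name and exact weights are my assumptions, not taken from the poster's actual config):

```
! Global: queue-set 1 buffer split matching the 10/80/5/5 description
mls qos queue-set output 1 buffers 10 80 5 5

interface GigabitEthernet1/0/24
 speed 100
 duplex full
 priority-queue out                    ! queue 1 (CoS 5) serviced strictly first
 srr-queue bandwidth share 10 80 5 5   ! shared weights for egress queues 1-4
```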
interface GigabitEthernet1/0/24
 priority-queue out

CARDGA_A_CC375_06# sh mls qos inter g1/0/24 queueing
Egress Priority Queue : enabled            ---> queue 1 is set as a priority queue via the interface config above
Shaped queue weights (absolute) : 25 0 0 0 ---> shaped setting for egress queues 1,2,3,4 respectively
Shared queue weights : 25 25 25 25         ---> shared setting for egress queues 1,2,3,4 respectively

When the priority queue is enabled with the "priority-queue out" interface config command, as you have done above, SRR (shaped round robin) services priority queue 1 until it is empty before servicing the other three queues (2, 3, and 4). When the egress priority queue is enabled, it overrides the SRR shaped and shared weights for queue 1, and those settings are ignored for that queue.

The remaining three queues (2, 3, 4) have no shaped setting (setting = 0) and so fall back to shared mode. The shared setting for queues 2, 3, and 4 is 25% each. This means each queue gets 25% of the bandwidth before SRR advances to the next queue. If one of these three queues is not using its full 25% share (i.e., little or no traffic in that queue), SRR immediately advances to the next queue, and the other queues get more than their guaranteed 25%. Again, these three queues are only serviced this way while priority queue 1 is empty.

So to recap how things work: priority overrides shaped, which overrides shared, when configured for a given queue. The difference between shaped and shared is that a shaped value caps you at that bandwidth utilization and you will not get more, while with shared you are guaranteed at least that amount and may get more if the other queues are not using their share.

The above settings control how the four egress queues are serviced (how packets are emptied from them), but they do not specify how big a queue is. A queue is basically a buffer and so has a given memory size. Changing the "buffer" settings is how you control the size of each queue.
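To illustrate the precedence described above (the weights here are just examples, not recommendations), shaped and shared weights are both set per interface, and a non-zero shape weight takes precedence for that queue:

```
interface GigabitEthernet1/0/24
 srr-queue bandwidth shape 25 0 0 0     ! queue 1 shaped to 1/25 of the port (shape weights are inverse); 0 = shared mode
 srr-queue bandwidth share 25 25 25 25  ! queues in shared mode get at least their weight, more if others are idle
 priority-queue out                     ! overrides both shape and share for queue 1
```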
The Catalyst 3750 switch uses "buffer sharing", where unused interface buffers are returned to a "common pool" of buffers that can then be temporarily borrowed by any interface on the switch as required. Each interface starts by being allocated a fixed amount of egress buffer. A portion of this fixed buffer is reserved by the interface, while the rest is made available to other interfaces when not in use. This allows much more efficient use of the global buffer resources of the switch. Once QoS is enabled on the switch, the buffer assigned to each interface is carved into 4 distinct queues in order to differentiate between traffic of different priorities.

"buffers": This field defines how much (as a percentage) of the fixed egress buffer allocated to an interface is assigned to each of the 4 queues on that interface. In the default configuration each of the 4 queues receives an equal 25% of the buffer. In your modified configuration, queue 1 was decreased to 10%, queue 2 was increased to 80%, and queues 3 and 4 were decreased to 5% each.
"Reserved": This field defines, as a percentage, how much of the "buffers" allocated to a given queue is reserved by that queue (no other interface or queue may borrow these buffers) and how much is returned to the common pool (a pool of buffers that may be temporarily borrowed by other queues on any interface in the switch when they are not in use). In the default configuration, all 4 queues reserve only 50% of the "buffers" allocated to them and return the remaining 50% to the common pool.
"maximum": This field defines, as a percentage, the maximum amount of buffers a queue may borrow from the common pool beyond the "buffers" allocated to it. By default a queue can borrow up to 400%. This temporary borrowing only occurs when the queue on a given interface requires it (all reserved buffers for that queue are used up). When the borrowed buffers are no longer needed, they are returned to the common pool.
"Threshold": This field defines, as a percentage, the fullness of a given queue before drops occur. Since more than one QoS value may be mapped to a given queue, multiple thresholds are defined to differentiate drops for different QoS values within that queue.
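Putting those four fields together, the queue-set values are configured globally and applied per interface along these lines (a sketch; the threshold values shown are assumptions for illustration):

```
! Queue-set 1: give queue 2 the bulk of the interface buffer
mls qos queue-set output 1 buffers 10 80 5 5
! threshold <qset-id> <queue> <drop-thr1> <drop-thr2> <reserved> <maximum>
mls qos queue-set output 1 threshold 2 100 100 50 400

interface GigabitEthernet1/0/24
 queue-set 1   ! apply queue-set 1 to this interface
```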
In summary, the priority/shaped/shared settings shown by the "sh mls qos inter g1/0/24 queueing" command dictate how the egress queues are serviced (emptied), while the settings shown by the "sh mls qos queue-set" command dictate how many buffers are assigned to each of those queues.
Just to add to the above information, here are some commands to help monitor the 4 egress transmit queues associated with a given interface...
Some points to note:
- The queue numbering in the "show platform" commands below is zero-based, so queues 0,1,2,3 actually correspond to queues 1,2,3,4 respectively.
- The "enqueue" command gives a count of the frames serviced by a given threshold (weight) for a given queue on the specified interface.
- The "drop" command gives a count of the frames that were dropped at a given drop threshold (weight) for a given queue on the specified interface.
- The "maps" commands tell you which CoS/DSCP values are mapped to which drop threshold (weight) of which queue.
- In the "sh mls qos queue-set" output in the original thread post above, you only see 2 drop thresholds (weights), while in the "show platform" commands below you see 3. This is because 3 drop thresholds exist per queue, but only 2 of them are configurable; the 3rd is hard-set to 100%.
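The commands referred to above are the per-port ASIC statistics and the output-queue maps (the interface name here is just an example):

```
show platform port-asic stats enqueue gigabitethernet1/0/24  ! frames enqueued per queue/threshold
show platform port-asic stats drop gigabitethernet1/0/24     ! frames dropped per queue/threshold
show mls qos maps cos-output-q                               ! CoS  -> queue/threshold map
show mls qos maps dscp-output-q                              ! DSCP -> queue/threshold map
```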
NOTE: how to read the above "show mls qos maps dscp-output-q" command output. d1 is the first digit of the DSCP value and d2 is the second digit. For example, DSCP 46 means d1 = 4 and d2 = 6. In the table, row d1 = 4 intersects column d2 = 6 at the value 01-01, which means DSCP 46 is mapped to drop threshold 1 of queue 1 (the priority queue).
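If you ever need to change where a DSCP value lands, the map is modified globally rather than per interface. For example (illustrative only; this matches the default placement of DSCP 46 described above):

```
! map DSCP 46 to threshold 3 of output queue 1
mls qos srr-queue output dscp-map queue 1 threshold 3 46
```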