Using the default QoS settings (buffers 25 25 25 25, DSCP 0 mapped to Q2, etc.), I see a lot of output drops.
Now the switch is running with the AutoQoS-generated queues (DSCP 0 mapped to Q4, etc.), and these settings perform better:
Switch# sh mls qos queue-set
Queueset: 1
Queue     :   1     2     3     4
----------------------------------
buffers   :  10    10    26    54
threshold1: 138   138    36    20
threshold2: 138   138    77    50
reserved  :  92    92   100    67
maximum   : 138   400   318   400
Queueset: 2
Queue     :   1     2     3     4
----------------------------------
buffers   :  16     6    17    61
threshold1: 149   118    41    42
threshold2: 149   118    68    72
reserved  : 100   100   100   100
maximum   : 149   235   272   242
However, some ports are still experiencing drops (for example, one running constantly at a maximum of 30 Mbps / 4000 pps outbound), and the pattern is strange: one server goes up to 50 Mbps (50% load) outbound without drops, while others already start dropping at 3-5 Mbps outbound. This got me thinking it might be related to packets per second rather than bytes per second.
I have done a test in the lab, and, with the same QoS settings, I can easily push 50 Mbps / 52,000 pps out of a 100 Mbps interface without any drops.
Therefore the only thing left for me to consider is the shared buffer structure of the C3750E. There might be contention for the ASIC buffers, so that at a given time t a server on port 1/2 takes up all the buffers, forcing a server on port 1/3 to start dropping at a very low rate.
Is there any way I can see the length of the queue and whether the switch is tail-dropping? Can I see the queue length globally (at the ASIC level?) and see whether there are drops there (the show buffers command, maybe?)
P.S. The switch is running 12.2(50)SE2 and is only doing L2 switching.
Ports are all configured as:
srr-queue bandwidth share 10 10 60 20
srr-queue bandwidth shape 10 0 0 0
queue-set 1 (default)
* Also, do the inbound and outbound directions share buffer space? I have the impression that ports with 50 Mbps inbound load start to drop much faster outbound (already at 5-10 Mbps).
"* Also, do the inbound and outbound directions share buffer space? I have the impression that ports with 50 Mbps inbound load start to drop much faster outbound (already at 5-10 Mbps)."
From the documentation, my impression is that it's separate buffer space, but I'm not 100% certain. I believe I've read that Cisco usually doesn't see the inbound queues as an issue. (Which makes sense, since egress queuing is generally the congestion point: the egress port doesn't have sufficient bandwidth to absorb the ingress port or ports.)
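Supporting that impression: on the 3750 the ingress queues are configured globally, separately from the egress queue-sets, which suggests the two buffer pools are distinct. You can inspect and tune them with something like the following (the buffer/bandwidth values shown are purely illustrative, not a recommendation):

```
! Display the global ingress queue configuration (buffers, bandwidth,
! priority queue) - there is no per-port queue-set on ingress:
Switch# show mls qos input-queue

! Example of changing the buffer split between the two ingress queues:
Switch(config)# mls qos srr-queue input buffers 90 10
Switch(config)# mls qos srr-queue input bandwidth 90 10
```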
Examination of the MLS QoS port stats, and where the drops are happening, would be a good place to start if you're going to attempt to tune the egress queues.
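Concretely, the per-queue enqueue/drop counters can be read per interface; something along these lines should show which queue and threshold the drops land on (the interface name is just an example):

```
! Per-queue / per-threshold counters: look at the
! "output queues enqueued" and "output queues dropped" sections.
Switch# show mls qos interface gigabitethernet1/0/2 statistics

! Lower-level drop counters on the port ASIC (available on the
! 3750 12.2 trains, as far as I know):
Switch# show platform port-asic stats drop gigabitethernet1/0/2
```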
What's not clear from the Cisco documentation, and from the provided stats, is how best to allocate buffers between reserved and common. Further, there might not be stats to indicate whether drops come from hitting a WTD threshold or from a lack of buffer space.
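For reference, the knobs in question are the queue-set commands: the buffer split across the four egress queues, and per queue the two WTD thresholds plus the reserved/maximum split. The numbers below are illustrative only (they mirror the queue-set output quoted above, not a recommendation):

```
! Allocate the buffer pool across the four egress queues of queue-set 1:
Switch(config)# mls qos queue-set output 1 buffers 10 10 26 54

! Per queue: drop-threshold1, drop-threshold2, reserved, maximum
! (all expressed as a percentage of the queue's allocated buffers):
Switch(config)# mls qos queue-set output 1 threshold 2 138 138 92 400
```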
Interestingly enough, the buffer settings created by Auto-QoS differ between IOS versions: on the 3560/3750, Cisco changed the buffer values somewhere between 12.2(40)SE and 12.2(50)SE, as far as I remember.
So even though it is stated that you *should not* tune buffers, Cisco itself seems to have determined it to be necessary. Perhaps someone inside Cisco can/will comment on this?