Cisco Support Community
New Member

QOS and the TX-Ring

I need to know whether, under severe congestion (e.g. a 100 Mb LAN feeding a 64 kb WAN), it is possible for packets going from the LAN onto the bottlenecked WAN link to never even reach the output buffer or the tx-ring on the interface, so that IOS is unable to examine the marking on the packets and therefore cannot give them their required QoS treatment.

The reason I am asking is that we have exactly the scenario above: 100 Mb LAN to 64 kb WAN.

Traffic that was supposed to get 30% of the bandwidth was extremely slow, and output drops were very high.


Re: QOS and the TX-Ring


What you are describing is exactly what happens under congestion: packets are first queued to the configured software queues, and then de-queued from there into the tx-queue.

If you are using CBWFQ on the interface, that will kick in and determine packet scheduling order before the packets reach the tx-queue.
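A minimal CBWFQ policy of the kind described above might look like the sketch below. The class names, DSCP values, interface, and percentages are illustrative assumptions, not taken from the original poster's configuration:

```
! Classify on DSCP markings set upstream (assumed values)
class-map match-any VOICE
 match dscp ef
class-map match-any CRITICAL-DATA
 match dscp af31
!
policy-map WAN-EDGE
 class VOICE
  priority 32          ! strict-priority (LLQ) queue, in kbps
 class CRITICAL-DATA
  bandwidth percent 30 ! CBWFQ bandwidth guarantee during congestion
 class class-default
  fair-queue
!
interface Serial0/0
 bandwidth 64
 service-policy output WAN-EDGE
```

With this in place, scheduling between the software queues is decided by the policy before packets are handed to the tx-ring.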

Hope that helps - pls rate the post if it does.


New Member

Re: QOS and the TX-Ring


We ran into a similar problem when we were doing some testing at the Cisco Proof-of-Concept lab at the San Jose campus. We were only blasting about 8 megs of non-stateful traffic, along with voice, through a little T1 WIC in a 2600, and we were dropping traffic inconsistently with the service-policy. The show policy-map interface output displayed the correct behavior, but we were dropping voice and BGP, which should have received priority and guaranteed bandwidth, respectively.

The Cisco lab guys got one of their premier QoS experts on the line, and he explained that at a certain point you will simply overrun the WAN interface hardware. The bigger the blade/card, the more traffic you can blast at it. But with the smaller cards, you can have the most beautiful QoS policy ever created in place, and if you keep pushing more and more traffic at it, at some breaking point it will just start dropping a bit of everything. He had seen it many times before, and it helped us to know that it wasn't a misconfig or a bug.

Our solution was to police inbound at the LAN interface that was supplying the huge volume of traffic, and it worked like a champ. The outbound interface was still receiving more traffic than it could send across the WAN link, but due to the inbound policing that had occurred on the LAN interface, it was not overrun beyond processing capacity.
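The inbound-policing fix above can be sketched roughly as follows. The rate, burst sizes, and interface name are assumptions for illustration; the idea is just to cap LAN-side input near the WAN line rate so the egress hardware is never overwhelmed:

```
! Police inbound on the LAN interface that feeds the 64k WAN link
policy-map POLICE-IN
 class class-default
  police 64000 8000 8000 conform-action transmit exceed-action drop
!
interface FastEthernet0/0
 service-policy input POLICE-IN
```

Excess traffic is then dropped at the LAN edge, and the outbound service-policy on the WAN interface can do its scheduling within a load the card can actually handle.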

Hopefully that helps. Good luck!

Best Regards

