QoS question

Answered Question
Sep 22nd, 2010

Hi,

I have a Gig interface directly connected to a 10 Mb link to the ISP.

GigabitEthernet0/0 is up, line protocol is up
  Hardware is MV96340 Ethernet, address is 0025.45f2.09a0 (bia 0025.45f2.09a0)
  Internet address is x.x.x.x/30
  MTU 1500 bytes, BW 10000 Kbit/sec, DLY 100 usec,
     reliability 255/255, txload 75/255, rxload 61/255

There's QoS configured outbound to guarantee a TCP application 30% of the interface bandwidth.

interface GigabitEthernet0/0
bandwidth 10000
service-policy output ARCUS-QOS

What values do I have to see in txload/rxload on the interface for QoS to kick in?

    Class-map: TRANSACTIONAL (match-all)
      23896 packets, 4597078 bytes
      5 minute offered rate 12000 bps, drop rate 0 bps
      Match: access-group 160
      Priority: 30% (3000 kbps), burst bytes 75000, b/w exceed drops: 0

When is the interface considered congested?

Federico.

Lei Tian Wed, 09/22/2010 - 15:26

Hi Federico,

For a one-layer policy-map, the interface is considered congested when the interface's tx-ring (bounded by tx-ring-limit) is full. That generates back pressure toward the software queue, and that's when software queueing starts working.

HTH,

Lei Tian

Federico Coto F... Wed, 09/22/2010 - 15:30

Lei,

Thank you very much. Just to be clear, are you saying that QoS will not start working until either txload or rxload reaches 255/255? Is this correct?

Federico.

Correct Answer
Lei Tian Wed, 09/22/2010 - 15:39

Hi Federico,

No, the tx-ring is a software pointer into where packets are physically stored in memory; you can think of it as a FIFO queue. When a packet leaves the interface, it is placed on the tx-ring first, waiting to be sent out. If the number of packets on the tx-ring exceeds the tx-ring-limit, subsequent packets are put in the software queue for processing. That's why bursty traffic can still cause interface congestion even when the txload on the interface is very low.
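A rough sketch of that spill-over behavior, with a made-up ring size (the real tx-ring limit is platform and driver specific):

```python
from collections import deque

TX_RING_LIMIT = 4          # hypothetical; the real value is set per platform

tx_ring = deque()          # hardware-drained FIFO, no QoS ordering
software_queue = deque()   # where the service policy (CBWFQ/LLQ) acts

def enqueue(pkt):
    """Packets land on the tx-ring first; once it is full, the rest
    spill into the software queue, i.e. the interface is congested."""
    if len(tx_ring) < TX_RING_LIMIT:
        tx_ring.append(pkt)
    else:
        software_queue.append(pkt)

# A burst of 6 packets arrives faster than the line drains the ring:
for p in range(6):
    enqueue(p)

print(len(tx_ring), len(software_queue))  # 4 2 -- congestion despite low txload
```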

HTH,

Lei Tian

Lei Tian Wed, 09/22/2010 - 16:19

Hi Federico,

Yes, it is configurable on the interface using the 'tx-ring-limit' command.

Regards,

Lei Tian

milan.kulik Thu, 09/23/2010 - 01:16

Hi Lei,

What about policing/shaping?

They are active all the time, regardless of the Tx-Ring status, correct?

BR,

Milan

Lei Tian Thu, 09/23/2010 - 03:15

Hi Milan,

That is correct. Shaping and policing are not considered queueing methods, so they are active all the time. The difference is that a shaper can generate back pressure when traffic exceeds the shaper's CIR; that's why shaping is used at the HQoS parent level.
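A toy per-interval model of that difference (made-up numbers, not IOS internals): the policer discards the excess immediately, while the shaper holds it back and drains it at CIR in later intervals.

```python
def police(offered, cir):
    """Policer: anything above CIR in an interval is dropped; no queue."""
    sent, dropped = [], 0
    for bits in offered:
        sent.append(min(bits, cir))
        dropped += max(0, bits - cir)
    return sent, dropped

def shape(offered, cir):
    """Shaper: the excess is buffered (back pressure) and sent later."""
    sent, backlog = [], 0
    for bits in offered:
        backlog += bits
        tx = min(backlog, cir)
        sent.append(tx)
        backlog -= tx
    return sent, backlog

burst = [5000, 1000, 0, 0]   # bursty offered load per interval, in bits
print(police(burst, 3000))   # ([3000, 1000, 0, 0], 2000) -- 2000 bits lost
print(shape(burst, 3000))    # ([3000, 3000, 0, 0], 0)    -- delayed, not lost
```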

HTH,

Lei Tian

milan.kulik Thu, 09/23/2010 - 04:14

Hi Lei,

thanks for your confirmation.

So the back pressure means the packet is kept in an output queue (the same queue as in the Tx-Ring congestion case) while being shaped?

What I also never understood completely is the "leaky token bucket" always used in shaping explanations.

I know it's an analogy only, but how is it implemented in fact?

I can imagine it can be done by some buffers but how do they change the size in time?

Do you have a good document available explaining those details, please?

Thanks,

Milan

francisco_1 Thu, 09/23/2010 - 05:49

Guys,

I find this post interesting.

Regarding the leaky token bucket in shaping: when a packet conforms to the average rate per interval, it is dequeued to the transmit ring, and a number of tokens equal to the size of the packet in bits is deducted from the bucket. If a packet exceeds the rate, there are not enough tokens in the bucket to dequeue it to the transmit ring, so shaping delays the packet and holds it in the internal shaping queue. These periods of delay for non-conforming traffic in the shaping queue result in the overall average rate (CIR) being lower than the access rate. This method is known as the leaky bucket algorithm.

Under the leaky token bucket algorithm, no more than Bc (committed burst) can be sent per Tc (the committed time interval for sending a burst). The solution is excess burst (Be): a dual leaky bucket, with the first token bucket representing the committed burst Bc and the second the excess burst Be. The Be bucket is only filled when the Bc bucket was not emptied in the previous interval; the extra credit left over from the Bc bucket is then moved to the Be bucket before Bc is refilled.

During the next interval, the scheduler can then dequeue up to Bc+Be bits.
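The Bc/Be bookkeeping described above can be sketched numerically (illustrative values, not the literal IOS implementation):

```python
def dual_bucket(offered, bc, be):
    """Per-Tc scheduler: Bc credit left unused in one interval spills into
    the Be bucket, so the next interval may send up to bc + be bits."""
    sent, backlog, be_credit = [], 0, 0
    for bits in offered:
        backlog += bits
        tx = min(backlog, bc + be_credit)
        sent.append(tx)
        backlog -= tx
        # only unused Bc credit (capped at Be) carries into the next Tc
        be_credit = min(be, max(0, bc - tx))
    return sent

# Bc = 8000 bits per Tc, Be = 8000 bits; a quiet interval banks 6000 bits
# of credit, so the burst interval that follows can send 8000 + 6000 bits:
print(dual_bucket([2000, 20000, 0], bc=8000, be=8000))  # [2000, 14000, 6000]
```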

Hope that makes sense.

Francisco

Lei Tian Thu, 09/23/2010 - 06:06

Hi Milan,

So the back pressure means the packet is kept in an output queue (the same queue as in the Tx-Ring congestion case) while being shaped?

Yes, that is correct.

What I also never understood completely is the "leaky token bucket" always used in shaping explanations.

I know it's an analogy only, but how is it implemented in fact?

I can imagine it can be done by some buffers but how do they change the size in time?

In theory, 'leaky bucket' and 'token bucket' are different, but in Cisco's docs the terms are used interchangeably. A token bucket doesn't smooth the traffic: as long as tokens are available, traffic is sent out, so it is possible for all granted tokens to be spent in a single time interval. A leaky bucket, on the other hand, does smooth the traffic; if the traffic rate exceeds the CIR, the excess is queued. The queue depth is the Bc value configured on the shaper.
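A minimal sketch of that contrast (toy model with invented numbers): the token bucket can spend every banked token in one interval, while the leaky bucket clamps each interval to CIR.

```python
def token_bucket(offered, cir, depth):
    """Tokens accrue at CIR up to `depth`; whatever is banked can be
    spent at once, so output may burst well above CIR in one interval."""
    tokens, sent = depth, []          # start with a full bucket
    for bits in offered:
        tx = min(bits, tokens)
        sent.append(tx)
        tokens = min(depth, tokens - tx + cir)
    return sent

def leaky_bucket(offered, cir):
    """Output is clamped to CIR per interval; the excess waits in a queue."""
    backlog, sent = 0, []
    for bits in offered:
        backlog += bits
        tx = min(backlog, cir)
        sent.append(tx)
        backlog -= tx
    return sent

burst = [9000, 0, 0]
print(token_bucket(burst, cir=3000, depth=9000))  # [9000, 0, 0] -- one big burst
print(leaky_bucket(burst, cir=3000))              # [3000, 3000, 3000] -- smoothed
```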

Unfortunately, I don't have a good document that explains those in detail. You can take a look at this one:

http://www.cisco.com/en/US/tech/tk543/tk545/technologies_tech_note09186a00800a3a25.shtml

HTH,

Lei Tian

francisco_1 Thu, 09/23/2010 - 06:29

"A leaky bucket, on the other hand, does smooth the traffic; if the traffic rate exceeds the CIR, it will be queued. The queue depth is the bc value configured on the shaper"

Lei, according to your statement above, are you saying the leaky bucket queue depth is the Bc value configured on the shaper? Is it not the Be value?

Francisco.

Lei Tian Thu, 09/23/2010 - 11:39

Ha, good catch Francisco. Yes, when Bc and Be are both configured, the queue depth is Bc+Be; the software queue kicks in after Bc+Be is full. When Be=0, the queue depth is Bc, and the software queue kicks in after Bc is full. To avoid further confusion, I shouldn't say queue depth; it is the bucket depth.

HTH,

Lei Tian

Federico Coto F... Fri, 09/24/2010 - 08:26

I want to ask you guys another question related to the original post.

The same scenario and router:

ARCUS-UP1-RT-1#sh int gig 0/0
GigabitEthernet0/0 is up, line protocol is up
  Hardware is MV96340 Ethernet, address is 0025.45f2.09a0 (bia 0025.45f2.09a0)
  Internet address is x.x.x.x/30
  MTU 1500 bytes, BW 10000 Kbit/sec, DLY 100 usec,
     reliability 255/255, txload 47/255, rxload 16/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 100Mb/s, media type is T
  output flow-control is XON, input flow-control is XON
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/75/4/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: Class-based queueing
  Output queue: 0/1000/0 (size/max total/drops)
  5 minute input rate 634000 bits/sec, 180 packets/sec
  5 minute output rate 1864000 bits/sec, 246 packets/sec
     20852976 packets input, 694202621 bytes, 0 no buffer
     Received 6267 broadcasts, 0 runts, 0 giants, 3 throttles
     76 input errors, 0 CRC, 0 frame, 0 overrun, 76 ignored
     0 watchdog, 0 multicast, 0 pause input
     0 input packets with dribble condition detected
     25851058 packets output, 966213560 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 unknown protocol drops
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 pause output
     0 output buffer failures, 0 output buffers swapped out

I have cleared the counters on this interface, and the errors shown above (throttles, input errors, ignored) increment from time to time.

The input/ignored errors

This is the explanation that I've found for throttles and for ignored errors:

Throttle - This counter indicates the number of times the input buffers of an interface have been cleaned because they have not been serviced fast enough or they are overwhelmed. Typically, an explorer storm can cause the throttles counter to increment. It's important to note that every time you have a throttle, all the packets in the input queue get dropped. This causes very slow performance and may also disrupt existing sessions.

Ignored - Number of received packets ignored by the interface because the interface hardware ran low on internal buffers.

I've seen the no buffer counter incrementing as well...

Question...

What should I be looking at to determine if this is causing the problem and what can be done to fix it?

Thank you!

Federico.

Federico Coto F... Fri, 09/24/2010 - 08:42

I didn't explain the problem...

A TCP application that uses HTTPS keeps dropping all day long.

It's an application going from the LAN to a remote server (through the Internet).

All traffic works perfectly with the exception of this traffic.

I'm trying to determine what could be causing the problem (we've seen TCP drops and retransmissions; everything seems to indicate congestion).

Federico.

Lei Tian Fri, 09/24/2010 - 11:10

Hi Federico,

I don't think that's a QoS problem. Can you check the output of 'show interface stats'? Do you see a lot of process-switched traffic? Only process-switched traffic is put on the input queue, so I would try to find out whether this type of traffic is being process switched.

Tuning the buffers might also help, but I would check whether the traffic is being process switched first.

http://www.cisco.com/en/US/customer/products/hw/routers/ps133/products_tech_note09186a00800a7b80.shtml

HTH,

Lei Tian

Federico Coto F... Fri, 09/24/2010 - 11:20

#sh int stats
GigabitEthernet0/0
          Switching path    Pkts In   Chars In   Pkts Out  Chars Out
               Processor     337000   31567901     274833   29361560
             Route cache   23608968 2487058066   29071572 3694126557
                   Total   23945968 2518625967   29346405 3723488117

It seems some packets are process-switched, even though CEF is enabled on the interface.

I'll check the buffer link and let you know. Thanks.

Federico.

Lei Tian Fri, 09/24/2010 - 18:26

Hi Federico,

It is normal to have some traffic process switched, such as control-plane traffic. Transit traffic should be CEF switched unless it cannot be, for example packets with TTL=1, packets destined to the device itself, or fragmented packets that need to be reassembled...

I think you can configure an ACL matching the traffic having the problem and run 'debug ip packet' against that ACL. The output will tell you whether this type of traffic is process switched.

Regards,

Lei Tian

ROBERTO TACCON Sat, 09/25/2010 - 03:54

Hi Lei,

Please, can you point me to a configuration example or any procedure/command (using Control Plane Policing) to check which packets will be process switched?

Thanks in advance

Roberto Taccon
