
QoS question

Hi,

I have this Gig interface directly connected to a 10 Mb link to the ISP.

GigabitEthernet0/0 is up, line protocol is up
  Hardware is MV96340 Ethernet, address is 0025.45f2.09a0 (bia 0025.45f2.09a0)
  Internet address is x.x.x.x/30
  MTU 1500 bytes, BW 10000 Kbit/sec, DLY 100 usec,
     reliability 255/255, txload 75/255, rxload 61/255

There's QoS configured outbound for a TCP application to provide 30% of interface bandwidth. 

interface GigabitEthernet0/0
bandwidth 10000
service-policy output ARCUS-QOS

What values do I have to see for txload/rxload on the interface for QoS to kick in?

    Class-map: TRANSACTIONAL (match-all)
      23896 packets, 4597078 bytes
      5 minute offered rate 12000 bps, drop rate 0 bps
      Match: access-group 160
      Priority: 30% (3000 kbps), burst bytes 75000, b/w exceed drops: 0
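
(For context, a minimal policy matching the counters above would look something like this; the actual ARCUS-QOS configuration may differ.)

class-map match-all TRANSACTIONAL
 match access-group 160
!
policy-map ARCUS-QOS
 class TRANSACTIONAL
  priority percent 30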

When is the interface considered congested?

Federico.


19 Replies

Lei Tian
Cisco Employee

Hi Federico,

For a one-layer policy-map, the interface is considered congested when the interface's tx-ring is full. That generates back pressure to the software queue, which is when software queueing starts working.
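
If it helps, the ring and software-queue state can usually be inspected with standard show commands like these (output format varies by platform and driver):

show controllers gigabitEthernet 0/0
show interfaces gigabitEthernet 0/0
show policy-map interface gigabitEthernet 0/0

The first shows tx-ring details where the driver exposes them, the second shows the software "Output queue:" counters, and the third shows the per-class queueing and drop counters.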

HTH,

Lei Tian

Lei,

Thank you very much. Just to be clear, are you saying that QoS will not start working until either the txload or rxload is 255/255? Is this correct?

Federico.

Hi Federico,

No, the tx-ring is a software pointer that points to where packets are physically stored in memory. You can think of it as a FIFO queue. When a packet leaves the interface, it is first placed on the tx-ring, waiting to be sent out. If the number of packets on the tx-ring exceeds the tx-ring-limit, subsequent packets are put into the software queue for processing. That's why bursty traffic can still cause interface congestion even when the txload on the interface is very low.

HTH,

Lei Tian

Great thanks!

One last question... can this tx-ring-limit be modified?


Federico.

Hi Federico,

Yes, it is configurable on the interface with the 'tx-ring-limit' command.
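
For example, where the interface driver supports it (the default and allowed range depend on the platform; the value 10 below is only for illustration):

interface GigabitEthernet0/0
 tx-ring-limit 10

A smaller ring makes the back pressure, and therefore the service policy, kick in sooner.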

Regards,

Lei Tian

Hi Lei,

what about policing/shaping?

They are active all the time, regardless of the Tx-Ring status, correct?

BR,

Milan

Hi Milan,

That is correct. Shaping and policing are not considered queueing methods, so they are active all the time. The difference is that a shaper can generate back pressure when traffic exceeds its CIR. That's why shaping is used at the HQoS parent level.
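
For example, a typical hierarchical setup for a hand-off like Federico's 10 Mb link would look roughly like this (policy names and the child class are placeholders based on this thread, not his actual configuration):

policy-map CHILD-QUEUING
 class TRANSACTIONAL
  priority percent 30
!
policy-map PARENT-SHAPE
 class class-default
  shape average 10000000
  service-policy CHILD-QUEUING
!
interface GigabitEthernet0/0
 service-policy output PARENT-SHAPE

The parent shaper generates back pressure at 10 Mbps, so the child queueing engages even though the Gig interface itself never physically congests.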

HTH,

Lei Tian

Hi Lei,

thanks for your confirmation.

So the back pressure means the packet is kept in an output queue (the same queue as in a Tx-Ring congestion case) while being shaped?

What I also never understood completely is the "leaky token bucket" analogy always used in shaping explanations.

I know it's only an analogy, but how is it actually implemented?

I can imagine it can be done with some buffers, but how does their size change over time?

Do you have some good document available that explains those details, please?

Thanks,

Milan

Guys,

I find this post interesting.

Regarding the leaking token bucket in shaping, when a packet conforms to the average rate per interval it is dequeued to the trasmit ring, and the number of tokens in the bucket equal to the size of the packet is sent in bits is deducted. Now if a packet is exceeded there are not enough tokens in the bucket to dequeued it to the trasmit ring, the shaping delays the packets and holds it in the internal shaping queue causing periods of delay by the non-conforming traffic in the shaping queue results in the overall average rate (CIR) beling lower that Access rate. This method is known the leaky bucket algorithm.

Based of the leaky token bucket algorithm, no more than Bc (burst of traffic) cab be sent per Tc (commited time to send a burst of traffic). so the solution be excess burst, is dual leaky bucket, with the first token bucket represent as commited burst bc and the second token bucket as Excess burst be to the excess burst bucket is only filled in the case the bc bucket was not emptied in the previous interval, the extra credit left over from the bc bucket os them moved to the be bucket before bc is refilled.

During the next internal, the scheduler can now de-queue up to bc+be bit.
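
In IOS terms, Bc and Be map onto the optional arguments of the shape command. A hypothetical example (numbers chosen only for illustration):

policy-map SHAPE-EXAMPLE
 class class-default
  shape average 3000000 30000 30000

That is shape average <CIR bps> <Bc bits> <Be bits>, so Tc = Bc/CIR = 30000/3000000 = 10 ms, and after an interval in which Bc was not used up, up to Bc+Be = 60000 bits can be dequeued in the next interval.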

Hope that makes sense.

Francisco

Hi Milan,

So the back pressure means the packet is kept in an output queue (the same queue as in a Tx-Ring congestion case) while being shaped?

Yes, that is correct.

What I also never understood completely is the "leaky token bucket" analogy always used in shaping explanations.

I know it's only an analogy, but how is it actually implemented?

I can imagine it can be done with some buffers, but how does their size change over time?

In theory, 'leaky bucket' and 'token bucket' are different, but in Cisco's documentation the terms are used interchangeably. A token bucket doesn't smooth the traffic; as long as tokens are available, traffic is sent out, and it is possible for all granted tokens to be used up in one time interval. A leaky bucket, on the other hand, does smooth the traffic; if the traffic rate exceeds the CIR, it will be queued. The queue depth is the bc value configured on the shaper.

Unfortunately, I don't have a good document that explains those in detail. You can take a look at this one:

http://www.cisco.com/en/US/tech/tk543/tk545/technologies_tech_note09186a00800a3a25.shtml

HTH,

Lei Tian

"A leaky bucket, on the other hand, does smooth the traffic; if the traffic rate exceeds the CIR, it will be queued. The queue depth is the bc value configured on the shaper"

Lei, according to your statement above, are you saying the leaky bucket queue depth is the bc value configured on the shaper? Is it not the be value?

Francisco.

Ha, good catch, Francisco. Yes, when bc and be are both configured, the queue depth will be bc+be. The software queue will kick in after bc+be is full. When be=0, the queue depth is bc; the software queue will kick in after bc is full. To avoid more confusion, I shouldn't say queue depth; it is the bucket depth.

HTH,

Lei Tian

I want to ask you guys another question related to the original post.

The same scenario and router:

ARCUS-UP1-RT-1#sh int gig 0/0
GigabitEthernet0/0 is up, line protocol is up
  Hardware is MV96340 Ethernet, address is 0025.45f2.09a0 (bia 0025.45f2.09a0)
  Internet address is x.x.x.x/30
  MTU 1500 bytes, BW 10000 Kbit/sec, DLY 100 usec,
     reliability 255/255, txload 47/255, rxload 16/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 100Mb/s, media type is T
  output flow-control is XON, input flow-control is XON
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/75/4/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: Class-based queueing
  Output queue: 0/1000/0 (size/max total/drops)
  5 minute input rate 634000 bits/sec, 180 packets/sec
  5 minute output rate 1864000 bits/sec, 246 packets/sec
     20852976 packets input, 694202621 bytes, 0 no buffer
     Received 6267 broadcasts, 0 runts, 0 giants, 3 throttles
     76 input errors, 0 CRC, 0 frame, 0 overrun, 76 ignored
     0 watchdog, 0 multicast, 0 pause input
     0 input packets with dribble condition detected
     25851058 packets output, 966213560 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 unknown protocol drops
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 pause output
     0 output buffer failures, 0 output buffers swapped out

I have cleared the counters on this interface, and the errors above (throttles, input errors, ignored) increment from time to time.

This is the explanation that I've found for the throttles and the ignored errors:

Throttle - This counter indicates the number of times the input buffers of an interface have been cleaned because they have not been serviced fast enough or they are overwhelmed. Typically, an explorer storm can cause the throttles counter to increment. It's important to note that every time you have a throttle, all the packets in the input queue get dropped. This causes very slow performance and may also disrupt existing sessions.

Ignored - Number of received packets ignored by the interface because the interface hardware ran low on internal buffers.

I've seen the 'no buffer' counter incrementing as well...

Question...

What should I be looking at to determine if this is causing the problem and what can be done to fix it?

Thank you!

Federico.

I didn't explain the problem...

A TCP application that uses HTTPS keeps dropping all day long.

It is an application from the LAN to a remote server (going through the Internet).

All other traffic works perfectly; only this traffic is affected.

I'm trying to determine what could be causing the problem (we've seen TCP drops and retransmissions; everything seems to indicate congestion).

Federico.
