
7200 output drops

iceman6684
Level 1

Noticed that a point-to-point OC3 circuit has steady output drops without any other errors or bandwidth issues. Thoughts?

POS1/0 is up, line protocol is up
  Hardware is Packet over Sonet
  Internet address is 10.20.54.1/29
  MTU 4470 bytes, BW 155000 Kbit, DLY 100 usec,
     reliability 255/255, txload 88/255, rxload 13/255
  Encapsulation HDLC, crc 16, loopback not set
  Keepalive set (10 sec)
  Scramble disabled
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters 00:24:01
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 3041
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 8149000 bits/sec, 5197 packets/sec
  5 minute output rate 53735000 bits/sec, 6888 packets/sec
     6656713 packets input, 1486058031 bytes, 0 no buffer
     Received 145 broadcasts, 0 runts, 0 giants, 0 throttles
              0 parity
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     8450106 packets output, 3048884844 bytes, 0 underruns
     0 output errors, 0 applique, 0 interface resets
     0 output buffer failures, 0 output buffers swapped out
     0 carrier transitions

8 Replies

Vignesh Rajendran Praveen
Cisco Employee

Hi,

Kindly check whether any policy-map is applied to the interface. If so, please provide the outputs of "show run int pos1/0" and "show policy-map interface pos1/0" from the device for further assistance.

Thanks & Regards,

Vignesh R P

There are none on either end.

Hi,

Thanks a lot for the confirmation. Looking at the output of "show int pos 1/0", I see that the output queue size is left at the default value of 40. That may not be enough to handle traffic that is bursty in nature and can result in output drops. As a safe practice, I would recommend increasing this size to 512 first (and, if that does not help, to 1024), clearing the interface counters, and then monitoring for further output drops.

Use the configuration below under interface pos 1/0 to achieve this:

"hold-queue 512 out"

(or)

"hold-queue 1024 out"


***********Plz do rate this post if you found it helpful*************************


Thanks & Regards,


Vignesh R P

That's what I was thinking as well, based on a document I saw earlier.

Hi,

Yes. That is what I usually advise my customers to do in such scenarios. I believe it should work out for you too.

***********Plz do rate this post if you found it helpful*************************


Thanks & Regards,


Vignesh R P

Joseph W. Doherty
Hall of Fame

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

3k drops out of 8M packets likely doesn't create any performance issues. However, for an OC3, the default egress queue of 40 is probably too shallow. Try increasing your output queue; initially, don't exceed about half your BDP (bandwidth delay product).

I raised it to 1024 and it seems better, with only 20 drops. However, how do I find the BDP on the 155 Mb circuit?

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

I raised it to 1024 and it seems better, with only 20 drops. However, how do I find the BDP on the 155 Mb circuit?

It's your ping time multiplied by your bandwidth. (Note: there's lots of information on BDP if you search the Internet. I'm recommending half BDP on the router because it's not the end host, and TCP usually doubles its send window per RTT during slow start.)

For example, say your ping time is 100 ms. That would be 155,000,000 (bps) * .1 (RTT) / 8 (bits/byte) / 1500 (bytes per MTU) / 2 (half BDP) = about 646 packets.

In your case, unless yours is international end-to-end, I would expect your RTT to be less than 100 ms.
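If you want to plug in your actual RTT rather than the 100 ms example, a quick ping across the link from the 7200 itself gives a usable number (a sketch; 10.20.54.2 is only assumed here to be the far end of the 10.20.54.1/29 link shown above, so substitute the real peer address):

! Sketch: measure round-trip time across the OC3 with full-size packets.
ping 10.20.54.2 repeat 100 size 1500

Use the reported average round-trip time (converted to seconds) as the RTT in the calculation above.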

If your MTU isn't 1500 (PMTUD enabled?), then the computed queue depth, in packets, would be larger with smaller packets.

Understand that if your source and receiver have connections faster than 155 Mbps, or there's an aggregate of senders, microbursting will often overrun your bandwidth in short bursts, causing drops. Using BDP allows for optimal buffer allocation.

Allocating an overly large queue might decrease drops, or it might actually increase them; the latter happens if sources don't see any drops when they reach the link's bandwidth capacity. Remember, drops are the original method TCP uses to "discover" available bandwidth. Also, if the queue is too large, you introduce queuing latency.

Since packet dropping is a TCP flow-control mechanism, you can "optimize" it with RED ([IMO very] difficult to tune) or with per-flow tail drops (which avoid global drop synchronization and target the flows causing the drops). The latter can usually be accomplished with fair queueing.

If your device is a router [edit: laugh, just noticed the subject is "7200 ...", so we can indeed assume yours is a router] and not a switch (likely the former for an OC3), a CBWFQ policy such as:

policy-map Sample
 class class-default
  fair-queue
!
interface OC3
 service-policy output Sample

might be worth trying.

Depending on your device's default queue depth for CBWFQ FQ, you might need to adjust (if the device allows it) the queue limits within the policy.
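For instance (a sketch only; whether queue-limit is accepted alongside fair-queue, and its exact syntax, vary by IOS release, and 512 is just an illustrative value, not a recommendation from this thread):

policy-map Sample
 class class-default
  fair-queue
  ! raise the per-class/per-flow queue depth, if the platform and IOS release allow it
  queue-limit 512

If your release rejects that combination, the interface-level hold-queue discussed earlier remains the available knob.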

Again, even with the most optimal configuration, as long as sender(s) oversubscribe the transit bandwidth, drops are always possible. The best you can do is try to ensure a minimal number of drops.

As a general rule of thumb, a drop rate of 1% or less is usually not adverse to most applications (your original stats seemed well under that).

PS:

BTW, it used to be that drops were less common (on "fast" WANs) under older TCP implementations, because such TCP hosts didn't provide an RWIN large enough to fill the BDP (i.e. the sender was bandwidth capped). Newer hosts, though, can in theory drive any WAN link up to the host's local bandwidth capacity (or try to). Newer TCP implementations also try to determine available end-to-end bandwidth by watching for spikes in the RTT of their ACKs, but they still use drops for flow control too.
