
lot of output drops

Adi Arslanagic
Level 1

Hello,

I have a problem with a lot of output drops on one of the FastEthernet interfaces (the router is a Cisco 3745):

FastEthernet0/1 is up, line protocol is up

Hardware is Gt96k FE, address is 000d.bc09.cba1 (bia 000d.bc09.cba1)

Description: TRUNK

MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,

reliability 255/255, txload 9/255, rxload 9/255

Encapsulation 802.1Q Virtual LAN, Vlan ID 1., loopback not set

Keepalive set (10 sec)

Full-duplex, 100Mb/s, 100BaseTX/FX

ARP type: ARPA, ARP Timeout 04:00:00

Last input 00:00:00, output 00:00:00, output hang never

Last clearing of "show interface" counters 1d08h

Input queue: 0/4096/0/0 (size/max/drops/flushes); Total output drops: 2938909

Queueing strategy: fifo

Output queue: 0/4096 (size/max)

5 minute input rate 3848000 bits/sec, 1179 packets/sec

5 minute output rate 3754000 bits/sec, 923 packets/sec

132391896 packets input, 3083523931 bytes

Received 8144821 broadcasts, 0 runts, 0 giants, 0 throttles

1 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored

0 watchdog

0 input packets with dribble condition detected

114686595 packets output, 431450282 bytes, 0 underruns

0 output errors, 0 collisions, 0 interface resets

0 babbles, 0 late collision, 0 deferred

0 lost carrier, 0 no carrier

0 output buffer failures, 0 output buffers swapped out

There are about 50 subinterfaces on this physical Fa0/1, and there is an average of 40 drops per second even though the output queue is empty.

While googling I found a thread describing a similar situation: http://atm.tut.fi/list-archive/cisco-nsp/msg16023.html

Any ideas?

14 Replies

Edison Ortiz
Hall of Fame

Ok, let's start fresh ---

Clear the counters and change the load-interval to 30 seconds. That value will give us a better indication of the current flow; a 5-minute average does not represent bursty traffic that well.
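Something like this, assuming Fa0/1 is the interface in question (prompts shown only for illustration):

Router# clear counters FastEthernet0/1
Router# configure terminal
Router(config)# interface FastEthernet0/1
Router(config-if)# load-interval 30
Router(config-if)# end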

What device is connected at the other end?

Are you forcing the speed/duplex on each end, or going auto/auto?

__

Edison.

OK, here it is:

Last clearing of "show interface" counters 00:00:15

Input queue: 0/4096/0/0 (size/max/drops/flushes); Total output drops: 608

Queueing strategy: fifo

Output queue: 0/4096 (size/max)

30 second input rate 4017000 bits/sec, 1170 packets/sec

30 second output rate 3946000 bits/sec, 906 packets/sec

19335 packets input, 8250918 bytes

Received 1358 broadcasts, 0 runts, 0 giants, 0 throttles

0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored

0 watchdog

0 input packets with dribble condition detected

14526 packets output, 8096392 bytes, 0 underruns

0 output errors, 0 collisions, 0 interface resets

0 babbles, 0 late collision, 0 deferred

0 lost carrier, 0 no carrier

0 output buffer failures, 0 output buffers swapped out

608 drops in 15 seconds.

The device on the other end is a Catalyst 2948G, and yes, speed and duplex are forced to 100/full.

Both input and output queues sized to 4096? Could you post stats from show buffers?

4Mbps shouldn't produce output drops.

Have you tried replacing the cable?

Can we change from 100/full duplex to auto/auto at both ends? (A sketch of the router-side change follows these questions.)

When did this problem start?

Can you post the show proc cpu from the router?
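For the auto/auto test, the router side would be something like this (the 2948G port would need the matching change; the exact CatOS syntax depends on the module/port):

Router(config)# interface FastEthernet0/1
Router(config-if)# speed auto
Router(config-if)# duplex auto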

Thanks

__

Edison.

What are you using as the native VLAN on the trunk port on the switch and the router?
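(For context: on a router dot1Q trunk the native VLAN is either carried untagged on the physical interface or placed on a subinterface configured with the native keyword, roughly like this; VLAN 1 here is just an example, and support for the native keyword depends on the IOS release:

Router(config)# interface FastEthernet0/1.1
Router(config-subif)# encapsulation dot1Q 1 native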

First of all, the CPU is very low:

CPU utilization for five seconds: 5%/4%; one minute: 5%; five minutes: 6%

I have been tweaking the buffers a bit (small and middle); the kind of commands I used are sketched after the output below:

Buffer elements:

1115 in free list (500 max allowed)

1785674357 hits, 0 misses, 1119 created

Public buffer pools:

Small buffers, 104 bytes (total 134, permanent 125, peak 502 @ 7w0d):

105 in free list (50 min, 175 max allowed)

3448041090 hits, 603297 misses, 186585 trims, 186594 created

150475 failures (0 no memory)

Middle buffers, 600 bytes (total 75, permanent 75, peak 623 @ 7w0d):

61 in free list (35 min, 135 max allowed)

925561213 hits, 589636 misses, 46420 trims, 46420 created

202647 failures (0 no memory)

Big buffers, 1536 bytes (total 50, permanent 50, peak 74 @ 7w0d):

50 in free list (5 min, 150 max allowed)

316676719 hits, 36790 misses, 832 trims, 832 created

35842 failures (0 no memory)

VeryBig buffers, 4520 bytes (total 10, permanent 10, peak 13 @ 7w0d):

9 in free list (0 min, 100 max allowed)

835 hits, 35009 misses, 39 trims, 39 created

35009 failures (0 no memory)

Large buffers, 5024 bytes (total 1, permanent 0, peak 4 @ 7w0d):

1 in free list (0 min, 10 max allowed)

50 hits, 34959 misses, 5027 trims, 5028 created

34959 failures (0 no memory)

Huge buffers, 18024 bytes (total 1, permanent 0, peak 11 @ 7w0d):

1 in free list (0 min, 4 max allowed)

5767 hits, 34973 misses, 6831 trims, 6832 created

34921 failures (0 no memory)
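The tweaking was done with the usual public buffer pool commands, along these lines (the numbers here are only illustrative, not the exact values I used):

Router(config)# buffers small permanent 150
Router(config)# buffers small max-free 300
Router(config)# buffers middle permanent 100
Router(config)# buffers middle max-free 200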

The native VLAN is VLAN 1 on the switch, but I don't have a subinterface for VLAN 1 configured on the router.

The strange thing to me is that the FastEthernet0/0 interface, which serves as an uplink for the subinterfaces configured on Fa0/1, does not have any drops:

Last input 00:00:00, output 00:00:00, output hang never

Last clearing of "show interface" counters 1d09h

Input queue: 1/1024/0/0 (size/max/drops/flushes); Total output drops: 0

Queueing strategy: fifo

Output queue: 0/1024 (size/max)

5 minute input rate 3873000 bits/sec, 894 packets/sec

5 minute output rate 3843000 bits/sec, 1055 packets/sec

120868899 packets input, 1879487910 bytes

Received 45350 broadcasts, 0 runts, 0 giants, 0 throttles

0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored

0 watchdog

0 input packets with dribble condition detected

127840254 packets output, 2826664173 bytes, 0 underruns

0 output errors, 0 collisions, 0 interface resets

0 babbles, 0 late collision, 0 deferred

0 lost carrier, 0 no carrier

0 output buffer failures, 0 output buffers swapped out

Same story here: speed/duplex fixed, same endpoint (C2948G).

Also, there is a second router (a 7204VXR) with an identical config and an identical problem, so I don't think it is a cable fault.

Thanks for the interest in my problem :)

Input queue: 1/1024/0/0 (size/max/drops/flushes);

Why are the packets sitting in the input queue on the switch interface? What does the CPU utilization look like on the switch?

This is the show interfaces output from the router's FastEthernet 0/0 interface; the switch is running CatOS.

The CPU on the switch:

CPU utilization for five seconds: 53.75%

one minute: 45.70%

five minutes: 43.19%

It is always ~50% even at very low traffic.

Also, there are no drops on the switch interfaces.

Lots of buffer failures!

From http://www.cisco.com/en/US/products/hw/routers/ps133/products_tech_note09186a00800a7b80.shtml

If there are no buffers available, and fast switching is enabled, there is a buffer failure and the packet is dropped. When the buffer pool manager process detects a buffer failure, it "creates" a new buffer to avoid future failures.

Possible cause of the counted output drops?

I guess it could be; that's why I've been tweaking the buffers. But the strange thing to me is that the Fa0/0 interface that 'mirrors' packets from Fa0/1 is not dropping any packets!?

"But the strange thing to me is that Fa0/0 interface that 'mirrors' packets from Fa0/1 is not dropping any packets!?"

Perhaps because of the mirroring: although the in/out bit rates are close, there is a little more separation between the in/out packet rates?

The large queue settings of 4,096, both in and out: are these defaults, or have they been set manually? If they are not defaults, try the defaults. If they are the defaults, try 75 in and 40 out.
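Setting those values explicitly would look something like this (Fa0/1 assumed):

Router(config)# interface FastEthernet0/1
Router(config-if)# hold-queue 75 in
Router(config-if)# hold-queue 40 out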

PS:

If you're trying to avoid any drops by using large queues, it can be counterproductive. With TCP, drops are often the only active flow-control mechanism for dealing with oversubscription.

I increased the queue depths yesterday to see if it had any effect, but it seems it doesn't, since packets are being dropped without the queue filling up.

Then we might still be dealing with a buffer tuning issue.

I would suggest returning the in/out queue sizes back to their defaults (and confirming what those defaults are).
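A sketch of that, again assuming Fa0/1 (the no form should return the interface to its defaults, which can then be confirmed in the show interfaces output):

Router(config)# interface FastEthernet0/1
Router(config-if)# no hold-queue in
Router(config-if)# no hold-queue out
Router(config-if)# end
Router# show interfaces FastEthernet0/1 | include queue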

You might also try automatic buffer tuning as suggested in: http://forums.cisco.com/eforum/servlet/NetProf?page=netprof&forum=Network%20Infrastructure&topic=WAN%2C%20Routing%20and%20Switching&CommCmd=MB%3Fcmd%3Dpass_through%26location%3Doutline%40%5E1%40%40.2cbfcc48/0#selected_message

I returned the queue depths to their default levels and turned on automatic buffer tuning. I'll report back if there are changes.
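For reference, I enabled the automatic tuning with the global command below; the syntax is from memory, so check it against your IOS release:

Router(config)# buffer tune automatic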
