
Catalyst 3548XL Underruns / Output Buffer Failures

stephen.duncan
Level 1

Summary: Output underruns and output buffer failures are slowly incrementing on a Catalyst 3548XL, with occasional "upset" client computers that need to be restarted to reconnect to the LAN. The errors are climbing on ALL ports on the switch.

The site in question has 5 Catalyst 3548XL switches connected via 1000BaseSX links. The affected switch is connected via a GigaStack link to another, identical 3548XL, and is the only switch on the entire site reporting the errors. All 48 ports are in a common VLAN separate from the management VLAN, and a range of different client computer types is connected to the switch.

The site uses NetBEUI, AppleTalk, and TCP/IP. There is no multicast traffic to speak of apart from AppleTalk. The background broadcast level is 8 to 10 frames per second. The hosts connected to this switch generate very little traffic, and the whole site is rather lightly loaded.
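
For anyone wanting to reproduce the broadcast figure: one rough way to sample it from the CLI is to clear a representative port's counters, wait a known interval, and read them back (the port and the 60-second interval are arbitrary examples):

clear counters fastEthernet 0/46
! wait roughly 60 seconds
show interfaces fastEthernet 0/46

The delta in the "Received N broadcasts" line divided by the wait in seconds gives the background frames-per-second rate.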

The output underrun and output buffer failure counters are incrementing and are always equal (the MAC addresses were removed by me):

FastEthernet0/46 is up, line protocol is up
  Hardware is Fast Ethernet, address is XXXX.XXXX.XXXX (bia XXXX.XXXX.XXXX)
  MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive not set
  Auto-duplex (Full), Auto Speed (100), 100BaseTX/FX
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input never, output 00:00:00, output hang never
  Last clearing of "show interface" counters 00:13:38
  Queueing strategy: fifo
  Output queue 0/40, 0 drops; input queue 0/75, 0 drops
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 4000 bits/sec, 8 packets/sec
     111 packets input, 76393 bytes
     Received 2 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 1 multicast
     0 input packets with dribble condition detected
     3656 packets output, 356361 bytes, 21 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier
     21 output buffer failures, 0 output buffers swapped out
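
A simple way to confirm the two counters climb in lockstep is to re-run the show command at intervals. If this IOS train supports output filtering (I haven't checked on the WC images), something like this trims it down to the two relevant lines:

show interfaces fastEthernet 0/46 | include underruns|buffer failures

Otherwise the full output works just as well.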

The number of interface errors matches the discarded-frame count that "show controllers ethernet-controller" reports for the transmit side of the interface in question:

Transmit                           Receive
372777 Bytes                       76393 Bytes
103 Unicast frames                 109 Unicast frames
1930 Multicast frames              1 Multicast frames
1784 Broadcast frames              1 Broadcast frames
21 Discarded frames                0 No bandwidth frames
0 Too old frames                   0 No buffers frames
0 Deferred frames                  0 No dest, unicast
0 1 collision frames               0 No dest, multicast
0 2 collision frames               0 No dest, broadcast
0 3 collision frames               0 Alignment errors
0 4 collision frames               0 FCS errors
0 5 collision frames               0 Collision fragments
0 6 collision frames
0 7 collision frames               0 Undersize frames
0 8 collision frames               22 Minimum size frames
0 9 collision frames               22 65 to 127 byte frames
0 10 collision frames              12 128 to 255 byte frames
0 11 collision frames              9 256 to 511 byte frames
0 12 collision frames              0 512 to 1023 byte frames
0 13 collision frames              46 1024 to 1518 byte frames
0 14 collision frames              0 Oversize frames
0 15 collision frames
0 Excessive collisions
0 Late collisions

As I noted, no other Catalyst 3548XL switch shows this error pattern, not even the switch connected directly to the affected unit via the GigaStack link. Debugging didn't provide much more information: nothing out of the ordinary is taking place with regard to interface flapping, MAC address table flaps, spanning tree, and so on. Debugging "cpu-interface" and "ethernet-controller ram" gave the following output:

Oct 19 17:03:36 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:37 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:37 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:37 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:37 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:37 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:37 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:37 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:37 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:37 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:37 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:37 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:38 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:38 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:38 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:38 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:38 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:38 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:38 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:38 NZDT: start_timer id mismatch satellite dm 3 state 1 id 53F570 exp id 53F6F0
Oct 19 17:03:38 NZDT: start_timer id mismatch satellite dm 3 state 1 id 53F3B0 exp id 53F7F0
Oct 19 17:03:38 NZDT: start_timer id mismatch satellite dm 3 state 1 id 53F6F0 exp id 53F630
Oct 19 17:03:38 NZDT: start_timer id mismatch satellite dm 5 state 1 id 5401F0 exp id 53FEB0
Oct 19 17:03:38 NZDT: start_timer id mismatch satellite dm 5 state 1 id 540170 exp id 53F7B0
Oct 19 17:03:38 NZDT: start_timer id mismatch satellite dm 5 state 1 id 53FEB0 exp id 53F2B0
Oct 19 17:03:38 NZDT: start_timer id mismatch satellite dm 5 state 1 id 53F630 exp id 53F3B0
Oct 19 17:03:39 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:39 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:39 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:39 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:39 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:39 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:39 NZDT: CPU Interface 0 storage notify failed on queue 1
Oct 19 17:03:39 NZDT: start_timer id mismatch satellite dm 5 state 1 id 53F7B0 exp id 53F7F0
Oct 19 17:03:39 NZDT: start_timer id mismatch satellite dm 5 state 1 id 53F630 exp id 53F2B0
Oct 19 17:03:39 NZDT: start_timer id mismatch satellite dm 5 state 1 id 53F7F0 exp id 540170
Oct 19 17:03:39 NZDT: start_timer id mismatch satellite dm 5 state 1 id 53F4F0 exp id 53F670

While the other Catalyst 3548XL switches produce similar debugging output, these messages are logged on the affected switch at a much higher rate than on any other switch on the site.
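
One thing that would make the rates directly comparable between switches is millisecond timestamps, since the logs above only have one-second resolution; this is standard IOS global configuration:

service timestamps debug datetime msec localtime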

A search of the TAC troubleshooting information and a bit of Googling didn't reveal much, other than a few other people wondering about these errors as well. An identical post here a few months ago noted that high volumes of multicast traffic can stretch the capacity of the internal shared buffers on the 3548XL and lead to the underrun / output buffer failure errors. High-volume multicast is certainly not at the heart of the problem I am seeing on this one switch on our LAN.

The switch has been replaced with another and the problem persists, so it doesn't look like a hardware issue. The switches are running the most current IOS release, 12.0(5)WC8. The TAC output interpreter run against "show tech-support" from the switch didn't give much away either, other than suggesting tuning of the interface buffers, something I assume is aimed more at the router side of the Cisco product line and not relevant here given such a light traffic load.
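
For reference, the kind of tuning the output interpreter means is the router-style global buffers configuration, along these lines (the pool names are standard IOS; the numbers are purely illustrative):

buffers small permanent 150
buffers small max-free 300
buffers middle permanent 100

Since the 3548XL switches out of its internal shared packet buffers rather than these system buffer pools, I can't see this helping here, especially under such a light load.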

The output interpreter had nothing to say about the output of the "show stacks", "show memory", or "show buffers" commands. Given that the errors don't "propagate" to the upstream switch, I suspect that a particular broadcast or multicast frame generated by a client computer is being sent to all the other ports on the switch and discarded there, although any clues would be useful if someone has seen this issue before and determined the root cause.
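
If the flooded-frame theory is worth chasing, the obvious next step would be to mirror one of the ports to a sniffer. If I remember the XL syntax correctly, this is done with the port monitor command on the destination interface (Fa0/48 as the destination is just an example, and it has to be in the same VLAN as the port it monitors):

interface FastEthernet0/48
 port monitor FastEthernet0/46

A capture on the PC hanging off Fa0/48 should then show whatever frame is being flooded to every port and discarded.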

Every port on the switch has an identical configuration:

interface FastEthernet0/1
 switchport access vlan XXX
 spanning-tree portfast
 no cdp enable
end

And lastly, there aren't any client switches or repeaters connected to any of the ports; this was one of the first things I checked, since I'm sure we all know how "helpful" eager end users can be. I would be willing to ignore the errors given the fairly low turnover, but this is the only switch anywhere on our LAN exhibiting the fault, and we are having problems with end-user computers "falling off the net" from time to time. Note that this affects different client computers each time, rather than any specific problematic machines.

1 Reply

owillins
Level 6

I don't think the underruns and buffer failures would affect the switch performance. You might want to take a look at bugs CSCdt69894 and CSCdv11799, since you have ruled out other possibilities.
