
Ignored Packets - No Buffer

scottyd
Level 1

We have several switches that are not performing well.

At random times they lose packets, which causes poor performance in our predominantly Citrix environment.

I am seeing packets being ignored / discarded on the switches that uplink to the core switches. The core switches are 3750s. Each switch has an uplink to each core switch for redundancy, using spanning tree. Packets are only being ignored on the active uplink of the distribution switch, not on the core switch ports. The switches are 2950s or 2960s and often have other switches feeding off them. Below is some data from the interface of one of the switches.

I have tried using Wireshark with a SPAN of the uplink port on the core to see if there is any strange traffic going on, but I have not seen anything yet.
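
For reference, a basic local SPAN session on a Catalyst switch is configured roughly like this (the session number and interface names are just placeholders for the uplink being monitored and the port the Wireshark PC sits on):

! mirror the uplink, both directions, to the sniffer port
monitor session 1 source interface GigabitEthernet0/1 both
monitor session 1 destination interface GigabitEthernet0/24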

Does anyone have any ideas to help me troubleshoot this? Do I need to tune the buffers? I notice there are no input errors. The switches do not appear to be very busy, but they do have about 20 VLANs. Most are using around 40% memory.

-------------------------------

Switch#sh int g0/1

GigabitEthernet0/1 is up, line protocol is up (connected)

Hardware is Gigabit Ethernet, address is 0013.1925.8319 (bia 0013.1925.8319)

MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec,

reliability 255/255, txload 1/255, rxload 1/255

Encapsulation ARPA, loopback not set

Keepalive set (10 sec)

Full-duplex, 1000Mb/s, media type is RJ45

input flow-control is off, output flow-control is off

ARP type: ARPA, ARP Timeout 04:00:00

Last input 00:00:00, output 00:00:01, output hang never

Last clearing of "show interface" counters 1d04h

Input queue: 2/75/0/0 (size/max/drops/flushes); Total output drops: 0

Queueing strategy: fifo

Output queue: 0/40 (size/max)

5 minute input rate 394000 bits/sec, 118 packets/sec

5 minute output rate 38000 bits/sec, 66 packets/sec

23927894 packets input, 1989794599 bytes, 14942 no buffer

Received 2992815 broadcasts (0 multicast)

0 runts, 0 giants, 0 throttles

0 input errors, 0 CRC, 0 frame, 0 overrun, 14942 ignored

0 watchdog, 1573714 multicast, 0 pause input

0 input packets with dribble condition detected

7798098 packets output, 652429272 bytes, 0 underruns

0 output errors, 0 collisions, 0 interface resets

0 babbles, 0 late collision, 0 deferred

0 lost carrier, 0 no carrier, 0 PAUSE output

0 output buffer failures, 0 output buffers swapped out

Switch#sh buffer

Buffer elements:

499 in free list (500 max allowed)

59615239 hits, 0 misses, 0 created

Public buffer pools:

Small buffers, 104 bytes (total 34, permanent 25, peak 109 @ 3w2d):

34 in free list (20 min, 60 max allowed)

75426070 hits, 2148 misses, 3450 trims, 3459 created

9 failures (0 no memory)

Middle buffers, 600 bytes (total 15, permanent 15, peak 51 @ 1w1d):

13 in free list (10 min, 30 max allowed)

387876 hits, 484 misses, 776 trims, 776 created

8 failures (0 no memory)

Big buffers, 1524 bytes (total 6, permanent 5, peak 11 @ 3w2d):

6 in free list (5 min, 10 max allowed)

171931 hits, 28 misses, 301 trims, 302 created

0 failures (0 no memory)

VeryBig buffers, 4520 bytes (total 4, permanent 0, peak 4 @ 1w5d):

4 in free list (0 min, 10 max allowed)

157534 hits, 2 misses, 0 trims, 4 created

0 failures (0 no memory)

Large buffers, 5024 bytes (total 0, permanent 0):

0 in free list (0 min, 5 max allowed)

0 hits, 0 misses, 0 trims, 0 created

0 failures (0 no memory)

Huge buffers, 18024 bytes (total 0, permanent 0):

0 in free list (0 min, 2 max allowed)

0 hits, 0 misses, 0 trims, 0 created

0 failures (0 no memory)

Interface buffer pools:

Calhoun Packet Receive Pool buffers, 1560 bytes (total 256, permanent 256):

222 in free list (0 min, 256 max allowed)

67494813 hits, 0 misses

---------------------------------------

12 Replies

Joseph W. Doherty
Hall of Fame

From your stats snapshot (quoted in full above), the lines I'd highlight:

Input queue: 2/75/0/0 (size/max/drops/flushes); Total output drops: 0

23,927,894 packets input, 1989794599 bytes, 14942 no buffer

Received 2,992,815 broadcasts (0 multicast)   <-- better than 10%?

0 input errors, 0 CRC, 0 frame, 0 overrun, 14942 ignored

no buffers

Gives the number of received packets discarded because there was no buffer space in the main system. Compare this with the ignored count. Broadcast storms on Ethernet networks and bursts of noise on serial lines are often responsible for no input buffer events.

ignored

Shows the number of received packets ignored by the interface because the interface hardware ran low on internal buffers. These buffers are different from the system buffers mentioned previously in the buffer description. Broadcast storms and bursts of noise can cause the ignored count to be increased.

Your system buffers might benefit from some fine tuning, but they don't look to be the cause of your drops. As the above notes, you might be having broadcast storm issues.
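
For what it's worth, if you ever did experiment with system buffer tuning, and only if your image exposes the global buffers commands (not every Catalyst switch image does), it looks roughly like the lines below. The numbers are purely illustrative, not recommendations, and this sort of tuning is normally only done with TAC guidance.

buffers small permanent 50
buffers small min-free 25
buffers small max-free 150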

I also highlighted, in your stats snapshot, 2 packets in the input queue. It's unusual for packets to be queued there since it indicates the device isn't keeping up with the inbound packet rate, yet your 5 minute interface average is low.

If you are having broadcast storms, they can cause lots of adverse effects (to all hosts). In other words, it might not just be the number of packets lost; those other effects might account even more for your occasional poor performance.

It's also possible that a high volume of broadcasts is legitimate; if it is, you'll likely need to reduce the size of your broadcast domain, if possible.

Hi Joseph,

Thanks for the info. That confirms what I was thinking. But why do some switches that uplink to the same core not have the dropped packets? It seems the higher the load on the switch, the more packets are lost. Some switches have not lost any. Also, so far I have not been able to capture any strange broadcast traffic using SPAN on the core switch.

Do you have any other suggestions, any debugs I can enable?

Thanks

Why are all switches not behaving the same? I see in your post to Vishwa that you mention VLANs. A VLAN should contain broadcasts, so perhaps your topology has high broadcasts in one VLAN and not others. Also, the switch itself could be adversely impacted if its management address is on a VLAN that sees the high broadcasts (one reason a separate management VLAN is recommended).

You could also compare the received broadcast counter on other uplink interfaces and see whether they too appear to be better than 10%.
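
For the interface you posted, that ratio works out to roughly 2,992,815 broadcasts out of 23,927,894 input packets, about 12.5% of everything received on that uplink, which is why the counter caught my eye.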

A high volume of broadcasts that the switch itself sees could drive the switch's CPU high, which might result in dropped packets. Recall I wondered why there were 2 packets in the input queue when the 5-minute load appeared so low.
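
If you can catch the switch while it is misbehaving, something along these lines will show whether the CPU is spiking and roughly what is consuming it (exact output varies a bit by platform and IOS version):

show processes cpu sorted
show processes cpu history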

Something else to consider: are all your switches exactly the same model, running exactly the same software?

If not already doing so, and if your switches support the feature, you might try activating broadcast storm control and see if it makes any difference.
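
On the 2960s (and on 2950 images that support it), broadcast storm control is configured per interface, roughly like this; the threshold here is only a starting-point placeholder, not a recommendation:

interface GigabitEthernet0/1
 storm-control broadcast level 5.00
 storm-control action trap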

Assuming broadcast storms are happening, they might be very brief, which makes it hard to find their cause.

I can't think of any other suggestions, beyond that you might want to make a new post asking for suggestions on running down a possible broadcast storm.

Thanks for the comments.

The management is on a different VLAN.

All switches are running the same (stable) IOS, at least within the same model. There are 2950s and 2960s.

I just managed to capture while there was a problem, and there do not appear to be significant broadcasts. But there are a lot of "TCP retransmissions" between an Exchange server and a backup server. I am not sure if this is a cause or a symptom.

Scotty

TCP retransmissions would be indicative of lost packets. Likely another symptom.

vishwancc
Level 3

Hi,

Could you tell us if the 2960 switches are interconnected or only uplink to the core?

If they are interconnected and trunked, please allow only the necessary VLANs on the trunks.

If all the VLANs are allowed, it will create unnecessary traffic.
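
Restricting a trunk to just the VLANs it needs is done per trunk interface, something like the following (the interface and VLAN IDs are placeholders):

interface GigabitEthernet0/1
 switchport trunk allowed vlan 10,20,30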

Chao

Vishwa

Hi Vishwa,

The 2960 switches are uplinked by gigabit fibre connections. They are trunked. We have only allowed the VLANs required on each, about 6.

Thanks

Scotty

Hi Scotty,

You said (predominantly Citrix environment), and you do not see the same problem on all the uplinks. Could you tell us if the switches where you see this problem are connected to the Citrix environment?

Chao

Vishwa

Hi Vishwa,

Yes, that is right: on the core switch there is no packet loss, and the traffic there is outbound. So it makes me wonder if the core switch is causing the storm, or if the switch is simply powerful enough to handle the storm.

All switches pass citrix traffic, if that is what you mean.

Scotty

Hi Scott,

I think the 2950s are not able to handle the Citrix traffic as well as the 3750s.

If you have a spare 3750, you could swap it in for a 2950 and check to confirm that.

Chao

Vishwa

DouglasScott99
Level 1

There is a decent chance that the ignored traffic is destined for the CPU and is not user data traffic, e.g. broadcasts (or maybe multicasts too).

You might be able to test this by setting up a couple of pings, one to the switch and one through the switch, and watching to see which one drops traffic. If it is traffic heading for the CPU, then there is not much you can do about it. I suppose you might be able to construct an ACL to block the offending traffic, but make sure you do not block STP, ARP, or necessary multicast traffic.
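
A rough way to run that comparison from another IOS device in the same VLAN (the addresses below are placeholders for the switch's management IP and for a host that sits behind the switch):

! this path is punted to the switch CPU
ping 10.1.1.10 repeat 500
! this path is hardware-switched through the same switch
ping 10.1.1.20 repeat 500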

jos-sanchez
Level 1

Hello,

I work with many kinds of Cisco switches, and I have observed the same errors on the C2950-24 series, but only on models based on Rev A0 and Rev B0.

I have not found any technical information about differences between the hardware revisions. Maybe a different amount of memory at the ASIC port level?

Best regards

José Sanchez, SIEN, Switzerland
