Catalyst 2960s Problems

ciscomagu
Level 1

Hi,

I wonder whether there is any solution yet for the high CPU, drops, and packet loss on the Cisco Catalyst 2960S.

We are running four 2960S switches in a stack and have some problems. The CPU is high, and we see port-asic drops and packet loss. Searching the Internet, I find many people with the same problem. This should be a big issue for Cisco to solve!

Supervisor TxQueue Drop Statistics

Queue  0: 29053
Queue  1: 0
Queue  2: 0
Queue  3: 103431
Queue  4: 0
Queue  5: 0
Queue  6: 0
Queue  7: 0
Queue  8: 579416
Queue  9: 0
Queue 10: 0
Queue 11: 0
Queue 12: 0
Queue 13: 0
Queue 14: 10478748
Queue 15: 0

CPU% per hour (last 72 hours), from 'show processes cpu history': the per-hour maximum (*) is mostly in the high 80s and 90s, while the per-hour average (#) stays at roughly 80% or above throughout the period. [ASCII graph omitted]

Many thanks for any comments

Best Regards

/Magnus

18 Replies

andtoth
Level 4

Hi,

On Catalyst switches you should not see high CPU utilization caused by traffic under normal conditions.

To get better insight into what is happening on the device, please attach the output of the following commands (a quick capture sketch follows the list):

- show processes cpu

- show platform tcam utilization

- show sdm prefer

- show ip traffic
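
In case it helps, a quick way to grab all of that in one telnet session is something like the following (the 'switch#' prompt is just a placeholder for your hostname):

switch# terminal length 0                  <-- disable paging so nothing is cut off
switch# show processes cpu sorted
switch# show processes cpu history
switch# show platform tcam utilization
switch# show sdm prefer
switch# show ip traffic
switch# terminal length 24                 <-- restore the default page length

'show processes cpu sorted' lists the busiest processes first, which makes it easier to tell whether the load sits in a specific process or in interrupts.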

I recommend checking the following guide to help you find the root cause and resolve the problem:

Catalyst 3750 Series Switches High CPU Utilization Troubleshooting

http://www.cisco.com/en/US/products/hw/switches/ps5023/products_tech_note09186a00807213f5.shtml

The 3750 and 2960 switches run similar IOS software, so feel free to refer to that guide.

Andras

Hi,

Thanks for your reply,

The files you requested are attached below.

The CPU usage is around 80% before I telnet to the switches.

It dropped to 50% while I was logged in over telnet and rose back above 80% after I logged out.

Sometimes the CPU goes above 90%; that is when the packet loss appears.

I hope you can see something from the attached files.

Best Regards

/Magnus

Hello,

If you are using the switch at the distribution layer and it carries many routes, the problem could be the SDM template you are using.

I see from your TCAM utilization output that the TCAM resources for routing are overwhelmed. If you have high traffic throughput, all the routing decisions are made in software, which raises the CPU load.

Please run 'show ip route summary', and if the number of subnets is higher than 320 (your current TCAM limit), change the SDM template to the "routing" template with 'sdm prefer routing'.

Another helpful command to see whether the problem is within the TCAM resources is 'show platform ip unicast failed adjacency'. If it shows any subnet, then you have to change your SDM template.
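
As a rough sketch, and assuming a platform and feature set where the routing template is actually offered (template names vary by model and release), the check and the change would be along these lines:

switch# show ip route summary              <-- compare the subnet count against the TCAM limit
switch# show platform tcam utilization
switch# configure terminal
switch(config)# sdm prefer routing         <-- reserve more TCAM space for unicast routes
switch(config)# end
switch# reload                             <-- the new template is applied only after a reload

The reload is disruptive, so plan it for a maintenance window.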

Hope this helps.

Hi,

We don't use IP routing on the switch (it's not supported); it's a very default config.

The commands you recommended don't exist on this platform.

The software version is:

ROM: Bootstrap program is Alpha board boot loader

BOOTLDR: C2960S Boot Loader (C2960S-HBOOT-M) Version 12.2(53r)SE, RELEASE SOFTWARE (fc3)

System returned to ROM by power-on

System restarted at 09:44:11 UTC Tue Oct 5 2010

System image file is "flash:/c2960s-universalk9-mz.122-53.SE2/c2960s-universalk9-mz.122-53.SE2.bin"

Hardware:

4 cisco WS-C2960S-48TS-L in a stack

When we telnet into the switch, the CPU drops from 80% to 50%.

Regards

/Magnus

Could you please attach the output of 'sh run' and 'sh ip route' from the switch?

Hi,

Here is the attached config file.

The command 'sh ip route' doesn't exist!

Hi ,

I have the same problem with high CPU utilisation on a Catalyst 2960S-48FPS-L switch running version 12.2(55r)SE. Can you please help me?

Hi,

We have new information about the case. Last night we kept a terminal session connected to the switch the whole time. During that time there was no packet loss, and the heartbeat between the teamed server NICs worked fine.

While the terminal session was connected, the CPU stayed low (around 50%).

Best Regards

/Magnus

Hi,

The fact that CPU utilization is high when no telnet/console session is connected to the switch is probably caused by the following software defect: CSCth24278

High CPU when no Console/VTY activity

Catalyst 2960S switch may report elevated CPU utilization (e.g., 50%) under
normal conditions.

This will be fixed in IOS version 12.2(58)SE.

For more information, please refer to the Bug Toolkit Page on the following link:

http://tools.cisco.com/Support/BugToolKit/search/getBugDetails.do?method=fetchBugDetails&bugId=CSCth24278

However, if your CPU utilization is 90% probably traffic is hitting the CPU. Could you please collect the output of the 'show controllers cpu-interface' command when you're observing high CPU utilization? Please capture it a few times to see which counters are increasing.
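
For example, something along these lines, repeated two or three times about a minute apart while the CPU is high (the "wait" line is just a note, not a command):

switch# show clock
switch# show controllers cpu-interface
      ... wait 30-60 seconds ...
switch# show clock
switch# show controllers cpu-interface

The receive queue whose counters grow fastest between captures usually points to the kind of traffic being punted to the CPU.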

Andras

Hi,

Looks like the problem is that you're using IP routing on the 2960 switch but did not change the SDM (Switch Database Manager) template so the TCAM can store routes. Since there is no space allocated in hardware for IPv4 indirect unicast routes (refer to the 'sh platform tcam util' output), the switch forwards/routes those packets in software, which causes the high CPU utilization. You will need to change to the LAN Base Routing template to allocate TCAM (hardware) space for unicast indirect routes.

Please check the Configuring SDM Templates on Catalyst 2960 Switches on the following link:

http://www.cisco.com/en/US/docs/switches/lan/catalyst2960/software/release/12.2_55_se/configuration/guide/swsdm.html

For more info about the SDM Templates, please check the following guide:

Understanding and Configuring Switching Database Manager on Catalyst 3750 Series Switches

http://www.cisco.com/en/US/products/hw/switches/ps5023/products_tech_note09186a00801e7bb9.shtml
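
For reference, on an image that supports it the template change is roughly the following; please verify that the lanbase-routing keyword is actually offered by 'sdm prefer ?' on your release before relying on this sketch:

switch# show sdm prefer                    <-- shows the template currently in use
switch# configure terminal
switch(config)# sdm prefer lanbase-routing <-- reserves TCAM space for IPv4 unicast routes
switch(config)# end
switch# reload                             <-- required before the new template takes effect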

Andras

Hi. I am experiencing the exact same problem with a stack of 4 x 2960S-48FPS-L:

Switch Ports Model              SW Version            SW Image
------ ----- -----              ----------            ----------
*    1 52    WS-C2960S-48FPS-L  12.2(53)SE2           C2960S-UNIVERSALK9-M
     2 52    WS-C2960S-48FPS-L  12.2(53)SE2           C2960S-UNIVERSALK9-M
     3 52    WS-C2960S-48FPS-L  12.2(53)SE2           C2960S-UNIVERSALK9-M
     4 52    WS-C2960S-48FPS-L  12.2(53)SE2           C2960S-UNIVERSALK9-M

There is no routing on this stack

CPU can go over 90%, but is generally around 50%. Packet loss happens when CPU goes over 90%.

Is the "the need change to LAN Base Routing template in order to allocate TCAM (hardware) space for Unicast Indirect routes" even if routing is no available on the switch?

Ian

If you are not using IP routing on the switch, there's no need to change the SDM template.

Could you please collect the output of the 'show controllers cpu-interface' command when you're observing high CPU utilization? Please capture it a few times to see which counters are increasing.

Hi,

I have been following another discussion thread about the C2960S problem, and the answer I got there is:

"The high CPU bug is still waiting for 12.2(58). But there's another bug, CSCtg77276, which affects 12.2(53) after 6 weeks of uptime. Although the public case notes on CSCtg77276 don't exactly mention it, my Cisco engineer informs me it could cause packet loss. Upgrading to 12.2(55) fixed our packet loss problem -- but the high CPU bug is still there."
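
In case it is useful to anyone here, upgrading the whole stack from the CLI is roughly the following. This is only a sketch: the TFTP server address and the image file name are placeholders, so substitute the .tar file for your feature set and target release.

switch# archive download-sw /overwrite /reload tftp://192.0.2.10/c2960s-universalk9-tar.122-55.SE.tar

The /overwrite option replaces the existing image, and /reload reboots the stack once the copy to every member succeeds, which keeps all members on the same version in one step.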

/Magnus

Yes, you're right. CSCth24278 will be fixed in 12.2(58)SE; I've modified my previous post accordingly.

I think we're listing too many issues here and it's getting confusing. It would be better to handle them in separate threads. This thread was originally opened for high CPU utilization causing some packet drops, and also for high CPU when not connected to the console. The console issue is covered by CSCth24278; if the utilization is higher than what that defect explains, the 'show controllers cpu-interface' output should be collected.
