Catalyst 2960S problem

Unanswered Question
Jun 17th, 2010

Hi,


I have 4 Catalyst 2960S, and they have the same problem.

The CPU usage is 60% before I telnet to them.

It falls to 20% when I log in via telnet, and rises back above 60% after I log out.

2010-06-18_095534.png

This is the output of 'show proc cpu' when I login.

2010-06-18_100251.png


This is the output of 'show proc cpu' 2 mins later.

2010-06-18_100804.png

And the traffic chart looks like this.

2010-06-18_102131.png


How can I solve this problem?

The hardware & software version is:

WS-C2960S-48TS-L   12.2(53)SE1           C2960S-UNIVERSALK9-M


Many thanks for any comments.

Ganesh Hariharan Thu, 06/17/2010 - 22:57


Hi,


Check out the link below for troubleshooting high CPU on Cisco 2960 switches. Hope it helps!


http://www.cisco.com/en/US/docs/switches/lan/catalyst3750/software/troubleshooting/cpu_util.html


Ganesh.H

allen_huang Fri, 06/18/2010 - 21:34

Hi Ganesh,


Thanks for your response.

But none of the conditions described there match our 2960S switches.

I have four 2960S and six 2960G switches on the same rack, connected to the same 3750 stack.

They have the same configuration.

But only the 2960S switches have this problem.


Allen Huang

mliu00002 Wed, 04/22/2015 - 14:50

If you are running RANCID, disable it. It causes high SSH process time on the 2960.

Leo Laohoo Fri, 06/18/2010 - 16:36


WS-C2960S-48TS-L   12.2(53)SE1

Try 12.2(53)SE2.
allen_huang Fri, 06/18/2010 - 21:41

Hi, leolaohoo,


Thanks for your response.

After upgrading one of them to 12.2(53)SE2, it is still the same situation.

CPU usage rises to 60% after boot completes.

It falls to 20% when I log in to the switch via telnet.

And rises back to 60% after logout.

Jayakrishna Mada Fri, 06/18/2010 - 21:43

Allen,


Can you get "show proc cpu | e  0.00" and "show controller cpu-int" (couple of times) from the switch  when it is running high cpu.


Thanks.


JayaKrishna

Viral Bhutta Sat, 06/19/2010 - 18:30

What happens when you console into the switch? Does the CPU remain high? If yes, then get "show proc cpu | ex 0.00".

nelson.garcia Sat, 06/19/2010 - 19:18

Just going to throw this out here, but since all of his switches are displaying the same symptom (high CPU usage) and they are all connected to the same 3750 stack, perhaps some process, such as STP or a routing protocol, is misconfigured and causing the high CPU usage.
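If that were the case, a quick way to check (purely a sketch of the suggestion above, not something the original poster ran) would be to look at which processes are consuming CPU and at the overall STP state:

Switch# show processes cpu sorted | exclude 0.00
Switch# show spanning-tree summary totals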

thomascollins Wed, 08/25/2010 - 05:22

We were seeing the same problem on our 2960S switches, and CSCth24278 was the culprit.  Cisco says it's a cosmetic bug that doesn't affect switch operations, though I have my doubts (we're seeing some performance problems that I suspect are high-CPU related).  No fix yet -- ETA March 2011!

Leo Laohoo Wed, 08/25/2010 - 16:05

I'm currently running 12.2(55)SE on the 2960S.  I'm testing the same code on the 3750/3750E/3750X and so far so good.

thomascollins Thu, 08/26/2010 - 05:34

Do you mean it fixed the high CPU?  Cisco TAC told me that the high CPU (CSCth24278) was NOT fixed in 12.2(55)SE, but would be fixed in a release coming in 2011.

thomascollins Fri, 08/27/2010 - 07:57

I know bug CSCth24278 is listed as cosmetic, but I think that may be wrong.  When we have high CPU, we are seeing packet loss on the switch.  We then connect a telnet session, CPU lowers, and packet loss stops.  Disconnect the telnet session, packet loss resumes.


We have a TAC case open documenting all of this.


We are experiencing the same problem, and 12.2(55)SE does not fix the CPU bug.

thomascollins Tue, 08/31/2010 - 04:58

Thanks for the info, that'll save me some work.

NetworkKnight Fri, 10/15/2010 - 02:04

We are having the same problem too.


#show processes cpu history



    2223222222228222222223222369666666666666669666666666666696
    8897787757889775899890865119130333135431413133253322443111
100                            *
90             *              *              *             *
80             *              *              *             *
70             *              *        *     *    *        *
60             *             *###############################
50             *             *###############################
40    *        *             *###############################
30 ************#*************################################
20 ##########################################################
10 ##########################################################
   0....5....1....1....2....2....3....3....4....4....5....5....
             0    5    0    5    0    5    0    5    0    5
               CPU% per minute (last 60 minutes)
              * = maximum CPU%   # = average CPU%

    9999999999999999999999999999999999999999999999999999999999999999999999
    9998999899699999998889989889899899999899899999999998999999999999988798
100 **********************************************************************
90 **********************************************************************
80 **********************************************************************
70 **********************************************************************
60 *################*********#########################################***
50 ##################*******###########################################**
40 ##################*******###########################################**
30 ###################******###########################################**
20 ######################################################################
10 ######################################################################
   0....5....1....1....2....2....3....3....4....4....5....5....6....6....7.
             0    5    0    5    0    5    0    5    0    5    0    5    0
                   CPU% per hour (last 72 hours)
                  * = maximum CPU%   # = average CPU%

Where you can see the gaps, I was logged in via SSH.


There are serious performance problems in our network.


I found some strange outputs:


If I enter "show platform port-asic stats drop":



Port-asic Port Drop Statistics - Summary
========================================
  Port  0 TxQueue Drop Stats: 0
  Port  1 TxQueue Drop Stats: 121840
  Port  2 TxQueue Drop Stats: 0
  Port  3 TxQueue Drop Stats: 239
  Port  4 TxQueue Drop Stats: 0
  Port  5 TxQueue Drop Stats: 8174
  Port  6 TxQueue Drop Stats: 17
  Port  7 TxQueue Drop Stats: 197598
  Port  8 TxQueue Drop Stats: 0
  Port  9 TxQueue Drop Stats: 0
  Port 10 TxQueue Drop Stats: 0
  Port 11 TxQueue Drop Stats: 0
  Port 12 TxQueue Drop Stats: 0
  Port 13 TxQueue Drop Stats: 0
  Port 14 TxQueue Drop Stats: 0
  Port 15 TxQueue Drop Stats: 16
  Port 16 TxQueue Drop Stats: 0
  Port 17 TxQueue Drop Stats: 16
  Port 18 TxQueue Drop Stats: 0
  Port 19 TxQueue Drop Stats: 679
  Port 20 TxQueue Drop Stats: 242
  Port 21 TxQueue Drop Stats: 0
  Port 22 TxQueue Drop Stats: 359
  Port 23 TxQueue Drop Stats: 0
  Port 24 TxQueue Drop Stats: 16
  Port 25 TxQueue Drop Stats: 0
  Port 26 TxQueue Drop Stats: 0
  Port 27 TxQueue Drop Stats: 0

I don't think that this dropping is normal. I will continue trying to solve the problem.


Greetings,

Benjamin

This is what one of our switches reports after we have just telnetted into it. The CPU drops to a normal level just after we telnet to it.



         11111555555555555555555555555555555555555555555555555
    6666622222000003333322222111112222211111444443333311111777
100                                                          
90                                                          
80                                                          
70                                                          
60                                                        ***
50           ************************************************
40           ************************************************
30           ************************************************
20           ************************************************
10 **********************************************************
   0....5....1....1....2....2....3....3....4....4....5....5....
             0    5    0    5    0    5    0    5    0    5   
               CPU% per second (last 60 seconds)
                                                             
    5555555555555555555555555555555555555555555555555555555555
    7667778587475557888685766865576765556757765565776566677465
100                                                          
90                                                          
80                                                          
70                                                          
60 ********** ******************************************** **
50 ##########################################################
40 ##########################################################
30 ##########################################################
20 ##########################################################
10 ##########################################################
   0....5....1....1....2....2....3....3....4....4....5....5....
             0    5    0    5    0    5    0    5    0    5   
               CPU% per minute (last 60 minutes)
              * = maximum CPU%   # = average CPU%
                                                                         
    5555655566655565555555555655655655565665565565656666556555556555556655
    9989199900099909988989998098099088909009919909090100990899990999980099
100                                                                      
90                                                                      
80                                                                      
70                                                                      
60 **********************************************************************
50 ######################################################################
40 ######################################################################
30 ######################################################################
20 ######################################################################
10 ######################################################################
   0....5....1....1....2....2....3....3....4....4....5....5....6....6....7.
             0    5    0    5    0    5    0    5    0    5    0    5    0
                   CPU% per hour (last 72 hours)

Stefan Steenkamp Wed, 10/27/2010 - 06:52

I have the same problem.


Stack group is as follows:


switch 1 provision ws-c2960s-48lps-l

switch 2 provision ws-c2960s-48ts-l

switch 3 provision ws-c2960s-48ts-l

switch 4 provision ws-c2960s-48ts-l

c2960s-universalk9-mz.122-53.SE2
When I SSH into the stack, the CPU utilization drops and stays down for the duration of the session; once I log out, the CPU goes up again.
ciscomagu Mon, 11/15/2010 - 02:25


Hi,


We have the same problem.


High CPU (80%) and packet loss. We then connect a telnet session, CPU lowers, and packet loss stops.  Disconnect the telnet session, packet loss resumes.


Any news about software release 12.2(58)?



/Magnus

thomascollins Mon, 11/15/2010 - 07:44

Hey Magnus -- two questions...

What IOS are you currently at?

How long have your switches been up?


The high CPU bug is still waiting for 12.2(58).  But there's another bug CSCtg77276 which affects 12.2(53) after 6 weeks of uptime.  Although the public case notes on CSCtg77276 don't exactly mention it, my Cisco engineer informs me it could cause packet loss. Upgrading to 12.2(55) fixed our packet loss problem -- but the high CPU bug is still there.


Tom

ciscomagu Mon, 11/15/2010 - 09:24

Hi Tom,


Thanks for your response and information about the bug CSCtg77276.

Have you any information when 12.2(58) arrives?


The version is 12.2(53)SE2 and uptime is 5 weeks, 6 days, 8 hours, 29 minutes.



Best Regards


/Magnus

thomascollins Mon, 11/15/2010 - 09:27

Unofficially from my TAC engineer I heard 12.2(58) will be early 2011.  But given your problem (packet loss) and your uptime, I would give 12.2(55) a try.  It won't fix your high CPU, but it may fix your packet loss.


Tom

Vishal Gupta Thu, 11/18/2010 - 01:40

Hi Allen,


This is happening due to a cosmetic software defect: you will observe a CPU load of more than 50-60% on these switches when you are not accessing them, but it immediately normalizes when you access the switch via console, telnet, or SSH. This is purely cosmetic and does not hamper the network or the services running on the device.


Please check the following bug for more info: CSCth24278 - High CPU when no Console/VTY activity.


Regards,


Vishal

allen_huang Thu, 11/18/2010 - 01:55

Hi, Vishal,


Thanks for your reply. But even if this issue is cosmetic, it is still hard to explain to my boss.

Could you help push R&D to correct this issue ASAP?

I think it should be quite easy to fix if it's really not a problem.

Thanks.

Vishal Gupta Thu, 11/18/2010 - 02:03

Hi Allen,


You can leave a PC connected to it via console; this will not let the CPU go high. And if you still face the issue, then it is happening due to some other trigger taking place in the network and not because of high CPU.
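If you go with that workaround, one way to keep the console session from timing out (an illustration only, not part of Vishal's suggestion) is to disable the exec timeout on the console line:

Switch(config)# line console 0
Switch(config-line)# exec-timeout 0 0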


I hope this way you can give an explanation to your Boss.


Kindly note that the respective team is working on it and you will be notified as soon as some information goes public about it.


Regards,


Vishal

steven.peterson Thu, 02/10/2011 - 06:34

The model revision numbers seem to play a big part in this bug CSCth24278, from my results below:

Switch WS-C2960S-48TS-L
  • Model revision number: A0
  • Motherboard revision number: A0
  • IOS image: c2960s-universalk9-mz.122-55.SE1.bin
  • Result is 30% CPU << This is good compared to the switches below

Switch WS-C2960S-48TS-L
  • Model revision number: B0
  • Motherboard revision number: A0
  • IOS image: c2960s-universalk9-mz.122-55.SE1.bin
  • CPU usage result 60-70%

Switch WS-C2960S-48TS-L
  • Model revision number: B0
  • Motherboard revision number: B0
  • IOS image: c2960-lanbasek9-mz.122-25.SEE2.bin
  • CPU usage result is consistently 5%

Although older, this image (c2960-lanbasek9-mz.122-25.SEE2.bin) seems to be a workaround for the CPU usage (going backwards, I know) across the range of model revision numbers B0, C0, D0, etc.
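For anyone wanting to compare their own units, the revision numbers quoted above come straight out of "show version"; filtering that output is a quick way to pull just those lines (standard IOS output filtering, nothing specific to this bug):

Switch# show version | include revision

This should return the "Model revision number" and "Motherboard revision number" lines.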



Can someone confirm the ETA for IOS release 12.2(58)?


Thank you

leam_hall Fri, 11/19/2010 - 03:57

We are having a similar issue on a 3750 stack. Oddly, after looking at the queues and turning off console logging, things seem to have quieted down. This is not a "cosmetic bug", as far as I can tell; we were having throughput and backup failure errors during the elevated CPU times.


The commands to change the logging were:


  • no logging console
  • logging buffered 128000


These came from a Cisco doc "Troubleshooting High CPU Utilization".
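For anyone following along, those two changes go in global configuration mode; here is a minimal sketch (the 128000-byte buffer size is just the value Leam quoted):

Switch# configure terminal
Switch(config)# no logging console
Switch(config)# logging buffered 128000
Switch(config)# end
Switch# copy running-config startup-config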


Leam

atrin.saghebfar Wed, 03/09/2011 - 03:26

Dear Allen

Did you find a way to solve this problem?

I have this problem with 15 2960S-TS switches too.


IOS Version: c2960s-universalk9-tar.122-55.SE2


Thanks,

Atrin

Eric Olinger Mon, 04/11/2011 - 10:44

I am troubleshooting very similar issues with a client who has several stacks of 2960S deployed. Today I checked and the 12.2(58)SE release was available. We are loading the first stack tonight and will see if it resolves the issues for us. I would agree it is reported as cosmetic, but users are reporting phones rebooting due to lack of network connectivity and various PC issues. I'll post as soon as I can confirm with the users.

Leo Laohoo Mon, 04/11/2011 - 15:36

Loaded 12.2(58)SE into two stacks of 2960S and so-far-so-good.

thomascollins Mon, 04/11/2011 - 17:19

Excellent, please keep us updated on any problems.  We'll be going to 58 soon.

Leo Laohoo Mon, 04/11/2011 - 17:53

Thanks to whoever gave me the ratings.


If anyone wants to upgrade their 3560E/3560X or 3750E/3750X then HOLD IT.


I've tried just "pumping" the IOS  (from 12.2(55)SE) to a 3750E stack and I nearly got a stroke.  The management connection to the switch STOPPED.  But the switch was still continuing to do it's job.  No link failure.  No packet drops.  Nothing.  So I'm going to do a few more test to find out what happened.

Eric Olinger Thu, 04/28/2011 - 11:17

So far so good with our site. Users feel that there is better performance. The first stack went fine. I appreciate the post about CSCto62631. We are going to upgrade the remainder of the campuses soon.


      333343333333333333333433333333333333433333333333333333333333333333333333
      753616444345534555364555569753442456044365656654364476543444554433475584
  100                                                                      
   90                                                                      
   80                                                                      
   70                                                                      
   60                                                                      
   50                      *                                               
   40 ** ***     **  *** * ********     ***   *******  *  ***     **     ***
   30 ######################################################################
   20 ######################################################################
   10 ######################################################################
     0....5....1....1....2....2....3....3....4....4....5....5....6....6....7..
               0    5    0    5    0    5    0    5    0    5    0    5    0 
                   CPU% per hour (last 72 hours)
                  * = maximum CPU%   # = average CPU%

atrin.saghebfar Mon, 04/11/2011 - 22:32

There is only one thing: today I read the release notes for this IOS, and Cisco said that:


CSCth24278 (Catalyst 2960-S switches)
The CPU utilization on the switch remains high (50 to 60 percent) when the switch is not being
accessed by a telnet or a console session. When you telnet or console into the switch, the CPU
utilization goes down.
There is no workaround.



But in my test it was OK, with no high CPU usage.

robert.siimon Tue, 04/26/2011 - 06:47
Hello,


It seems that Cisco has pulled this software, 12.2(58), from their site because of a very serious bug, CSCto62631.

I found a document which states:


Cisco IOS Release 12.2(58)SE images for all platforms have been removed from Cisco.com because of
a severe defect, CSCto62631. The solution for the defect will be in Cisco IOS Release 12.2(58)SE1, to
be available the week of May 9, 2011.


Meanwhile, all of you who have upgraded to this particular version can either test for this bug in your environment, downgrade, or implement a workaround.


regards,

Robert

Leo Laohoo Wed, 05/11/2011 - 02:10

Right.  12.2(58)SE1 has been released, as scheduled.

s.steenkamp Wed, 05/11/2011 - 05:55

Hi there,


I don't fully agree with Cisco that this is just a cosmetic bug.



I have been getting STP trap alerts that just do not make any sense. After my troubleshooting and investigation, I came to the following conclusion.



Stack details:

Switch Ports Model              SW Version            SW Image

------ ----- -----              ----------            ----------

*    1 52    WS-C2960S-48LPS-L  12.2(53)SE2           C2960S-UNIVERSALK9-M

     2 52    WS-C2960S-48TS-L   12.2(53)SE2           C2960S-UNIVERSALK9-M

     3 52    WS-C2960S-48TS-L   12.2(53)SE2           C2960S-UNIVERSALK9-M

     4 52    WS-C2960S-48TS-L   12.2(53)SE2           C2960S-UNIVERSALK9-M



My stack has 2 fibre links back to the two core switches for triangulation.

The fibre links terminate in separate stack members.

STP is running.




I have been getting the following message in the log.


002484: May 10 22:07:33.548: %XDR-6-XDRIPCNOTIFY: Message not sent to slot 4 because of IPC error timeout. Disabling linecard. (Expected during linecard OIR)









Some info on IPC and XDR.


PLATFORM _IPC Messages

This section contains the Inter-Process Communication (IPC) protocol messages. The IPC protocol handles communication between the stack master switch and stack member switches.


XDR Messages

This section contains eXternal Data Representation (XDR) messages.





When I compare the time stamp of the "show log" message above to my monitoring, I notice the following:

80% high CPU utilization at that time.






Now here's what I think is happening:

At times when the CPU goes high, it causes an IPC issue that then prevents the BPDUs from being sent between the stack members. STP thinks the link is down and fails over to the backup root. When the stack returns to normal, STP changes back.
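One way to test that theory (just a suggestion for correlating events, not something from the post above) is to watch the STP topology-change counters and the last-change source while the CPU is high:

Switch# show spanning-tree summary
Switch# show spanning-tree detail | include ieee|occurr|from|is exec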




What do you guys think...?

Leo Laohoo Wed, 05/11/2011 - 17:07


Right.  12.2(58)SE1 has been released, as scheduled.

Release Notes for the Catalyst 3750, 3560, 2960-S, and 2960 Switches, Cisco IOS Release 12.2(58)SE1

Cisco IOS Release 12.2(58)SE1 and later does NOT support all the Catalyst 3750 and 3560 switches. The models listed below are NOT supported in this release. For ongoing maintenance rebuilds for these switches, use Cisco IOS Release 12.2(55)SE and later (SE1, SE2, and so on).

  • WS-C3560-24TS
  • WS-C3560-24PS
  • WS-C3560-48PS
  • WS-C3560-48TS
  • WS-C3750-24PS
  • WS-C3750-24TS
  • WS-C3750-48PS
  • WS-C3750-48TS
  • WS-3750G-24T
  • WS-C3750G-12
  • WS-C3750G-24TS
  • WS-C3750G-16TD

kornalt130 Fri, 07/22/2011 - 05:20

Just to inform you guys, we have this problem on 3 2960S switches too (we don't have more yet).

They all had the c2960s-universalk9-mz.122-53.SE2 image.

I've updated one to the c2960s-universalk9-tar.122-58.SE1 image and the problem is solved. This one is just a single access switch. The other two are stacked but show exactly the same behaviour. I will update them when possible.
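For reference, a tar image like that is typically installed with archive download-sw (illustrative only; the TFTP server address below is a placeholder, and the /overwrite /reload options should fit your change window):

Switch# archive download-sw /overwrite /reload tftp://192.0.2.10/c2960s-universalk9-tar.122-58.SE1.tar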


We have never experienced any packet drops, loss, or anything else, so it seems to be just a cosmetic bug. But the stories above got me concerned, so I decided to upgrade the switches.

guidobrinkmannl Tue, 08/02/2011 - 16:35

Hi Guys,

Also my first post:-)


We have the same problem on our stacks.

1 52    WS-C2960S-48FPD-L  12.2(55)SE2           C2960S-UNIVERSALK9-M

2 52    WS-C2960S-48FPD-L  12.2(55)SE2           C2960S-UNIVERSALK9-M

3 52    WS-C2960S-48FPD-L  12.2(55)SE2           C2960S-UNIVERSALK9-M

4 52    WS-C2960S-48FPD-L  12.2(55)SE2           C2960S-UNIVERSALK9-M


CSCth24278 is still not solved in 12.2(58)SE or SE1 (according to the release notes)

I could not download the release notes of SE2 (it presents SE instead)


I'll upgrade 2 of our 9 stacks this weekend to the new c2960s-universalk9-mz.150-1.SE.bin (posted on July 26, 2011).

No mention of CSCth24278 in that document!


Let's wait and see!

Guido



guidobrinkmannl Sun, 08/07/2011 - 15:32

I upgraded 3 not-yet-installed stacks: utilization went down!

We installed these 3; let's see if they also perform under load!

sunshuangbeijing Wed, 12/14/2011 - 17:16

I have 4 WS-C2960S-24TS-L switches too.

But only 2 of them have this problem like yours (with the same IOS as yours).

The other 2 don't have this problem; their IOS is 12.2(55)SE3 UNIVERSALK9-M.

I think updating to 12.2(55)SE3 can solve this problem.

Hope this can help you.
