Cisco Support Community
New Member

WS-C2960S-48FPS-L is having high CPU issues. Others do not. The difference? An ASA5520-pair

I have a situation that I cannot seem to figure out. I have several locations with WS-C2960S-48FPS-L switches in stacks of 2, 3, or 4, and they are all monitored by WhatsUp Gold and Cisco Prime LMS. None of these stacks has any issues except at one location. The unique thing about the location in question is that it sits behind a pair of ASA5520 firewalls (no VPN involved). They look like this:

Here is some output from the "DR" switch above.

2960-48-DR-SW1#sh proc cpu hist
                                                              
    5555555555555555555555555555556666655555555555555555555555
    5555555555666669999977777888884444488888666666666677777555
100                                                           
 90                                                           
 80                                                           
 70                                                           
 60 **********************************************************
 50 **********************************************************
 40 **********************************************************
 30 **********************************************************
 20 **********************************************************
 10 **********************************************************
   0....5....1....1....2....2....3....3....4....4....5....5....
             0    5    0    5    0    5    0    5    0    5    
               CPU% per second (last 60 seconds)
                                                              
    6667666666666666986666666696999666668666666666686666666666
    8731321430254131894611643392677424337357245533243021224313
100                 *         * ***                           
 90                 **        * ***     *                     
 80                 **        * ***     *          *          
 70 ** *       *    #* *  *   * ***     * **  **   *          
 60 ##########################################################
 50 ##########################################################
 40 ##########################################################
 30 ##########################################################
 20 ##########################################################
 10 ##########################################################
   0....5....1....1....2....2....3....3....4....4....5....5....
             0    5    0    5    0    5    0    5    0    5    
               CPU% per minute (last 60 minutes)
              * = maximum CPU%   # = average CPU%
                                                                          
    9989999999999999999999999779669999999999999999998779999999999999999999
    9932925284328322823185428939866743482228344852383549434756387227221924
100 **  * * *   *   *   **  *  *  **   *   *   **  *   *   *** **  *   *  
 90 ** **#########*####**##**  *  *################*   *##################
 80 *****##################*** *  #################*** ###################
 70 *****##################*******##################***###################
 60 ###*########################**########################################
 50 ######################################################################
 40 ######################################################################
 30 ######################################################################
 20 ######################################################################
 10 ######################################################################
   0....5....1....1....2....2....3....3....4....4....5....5....6....6....7.
             0    5    0    5    0    5    0    5    0    5    0    5    0 
                   CPU% per hour (last 72 hours)
                  * = maximum CPU%   # = average CPU%

 

Here is output from a stack of 4 switches that has more PoE devices than the "DR" switch above.

CHC-4Stack#sh proc cpu hist
                                                              
    2222222222222222221111122222222222222222222222222222222222
    6666666622222000009999900000111111111111111222221111177777
100                                                           
 90                                                           
 80                                                           
 70                                                           
 60                                                           
 50                                                           
 40                                                           
 30 ********                                             *****
 20 **********************************************************
 10 **********************************************************
   0....5....1....1....2....2....3....3....4....4....5....5....
             0    5    0    5    0    5    0    5    0    5    
               CPU% per second (last 60 seconds)
                                                              
    2222222233222222223922232222322222222222222223222222222222
    7778849700663477684889607899077577787779778878788777885667
100                    *                                      
 90                    *                                      
 80                    *                                      
 70                    *                                      
 60                    *                                      
 50                    *                                      
 40                    #                         *            
 30 ***** ******  *****#**************************************
 20 ##########################################################
 10 ##########################################################
   0....5....1....1....2....2....3....3....4....4....5....5....
             0    5    0    5    0    5    0    5    0    5    
               CPU% per minute (last 60 minutes)
              * = maximum CPU%   # = average CPU%
                               1                       1                  
    9439843935393849333933395430543944395339333933494630333933393349333933
    8769146871284018548855587740054772774748427865376590247664381218829893
100 *  *   *   *   *   *   *   *   *   *   *   *   *   *   *   *   *   *  
 90 *  *   *   *   *   *   *   *   *   *   *   *   *   *   *   *   *   *  
 80 *  **  *   * * *   *   *   *   *   *   *   *   *   *   *   *   *   *  
 70 *  **  *   * * *   *   *   *   *   *   *   *   * * *   *   *   *   *  
 60 *  **  *   * * *   *   **  *   *   *   *   *   * * *   *   *   *   *  
 50 ** **  * * * * *   *   *** *** **  **  *   *   *** *   *   *   *   *  
 40 ********** * **** ******** *** ******* *  **********  ***  *  *** *** 
 30 **********************************************************************
 20 ######################################################################
 10 ######################################################################
   0....5....1....1....2....2....3....3....4....4....5....5....6....6....7.
             0    5    0    5    0    5    0    5    0    5    0    5    0 
                   CPU% per hour (last 72 hours)
                  * = maximum CPU%   # = average CPU%

Comparing the two, there is really no comparison. Clearly the "DR" switch, which is only a 2-switch stack, has a problem.

Look at the two when compared using "proc cpu sorted 5sec | ex 0.00".

2960-48-DR-SW1#sh proc cpu sort 5s | e 0.00
CPU utilization for five seconds: 61%/28%; one minute: 57%; five minutes: 57%
 PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process 
 139  2269252629 721670851       3144 18.29% 18.36% 18.26%   0 Hulc LED Process 
   4   327116288  18031534      18141  4.69%  1.12%  0.89%   0 Check heaps      
 108   346953243  76351182       4544  1.09%  1.05%  1.08%   0 hpm counter proc 
  50   318157281 1456816575        218  1.09%  1.21%  1.28%   0 Net Input        
 303        1443       407       3545  0.59%  0.07%  0.01%   1 SSH Process      
  69    76624097 310442931        246  0.29%  0.19%  0.17%   0 RedEarth Tx Mana 
  70    69760761 1441698330         48  0.19%  0.21%  0.19%   0 RedEarth Rx Mana 
 193    29262422  85412778        342  0.19%  0.14%  0.11%   0 Spanning Tree    
 148    88162105   7372892      11957  0.19%  0.23%  0.21%   0 HQM Stack Proces 
  33    11525088   8120464       1419  0.19%  0.05%  0.03%   0 Net Background   
 104    57606542 438480127        131  0.19%  0.14%  0.16%   0 hpm main process 
 149    36876809  29452028       1252  0.19%  0.10%  0.10%   0 HRPC qos request 
 178    31246094 167426336        186  0.09%  0.06%  0.05%   0 IP Input         
  88    17154908 1012791827         16  0.09%  0.08%  0.09%   0 HLFM address lea 
 250    13475194  50244429        268  0.09%  0.04%  0.01%   0 Inline Power     
  64    10216125 1348759559          7  0.09%  0.08%  0.09%   0 Draught link sta 
  90    10655259 1013952349         10  0.09%  0.04%  0.03%   0 HLFM address ret 
 109    16344887  79394681        205  0.09%  0.04%  0.01%   0 HRPC pm-counters 

 

CHC-4Stack#sh proc cpu sort 5s | e 0.00
CPU utilization for five seconds: 22%/0%; one minute: 23%; five minutes: 22%
 PID Runtime(ms)   Invoked      uSecs   5Sec   1Min   5Min TTY Process 
 108   165913483  41823591       3966  1.09%  0.82%  0.79%   0 hpm counter proc 
 148    45832286   4031977      11367  0.29%  0.21%  0.20%   0 HQM Stack Proces 
 104    33830260 282712188        119  0.29%  0.12%  0.11%   0 hpm main process 
 149    37066785  28884423       1283  0.29%  0.27%  0.23%   0 HRPC qos request 
  77    15564720  65900434        236  0.09%  0.09%  0.07%   0 hrpc <- response 
 251    15570226  44680134        348  0.09%  0.09%  0.07%   0 Marvell wk-a Pow 
  70    33525670 825274785         40  0.09%  0.11%  0.10%   0 RedEarth Rx Mana 
 193    36377860 111170316        327  0.09%  0.16%  0.14%   0 Spanning Tree    
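
One thing that jumps out in the "DR" output is the "61%/28%" line: the second number is CPU time spent at interrupt level, which as far as I understand usually means traffic is being punted to the CPU rather than switched in hardware. A quick way to see which CPU queues that traffic is landing in (assuming the 2960S supports the same platform commands described in Cisco's high-CPU troubleshooting guides) should be something like:

2960-48-DR-SW1#show controllers cpu-interface
! Dumps per-queue receive counters for traffic punted to the CPU.
! Running it twice, a few seconds apart, and comparing the deltas shows
! which queue (host, broadcast, routing protocol, etc.) is climbing fastest.

Comparing two snapshots of those counters should at least narrow down what kind of traffic is keeping the CPU busy on the "DR" stack but not on the others.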

 

I've cut back on SNMP polls and limited them as much as I can, but it doesn't seem to make a difference. In fact, even when I shut down WUG, the CPU still runs at about 50-60%. I read a blog where someone mentioned rapid LED flashing when their switches were plugged into a firewall, so I'm wondering if there is something I need to tweak on the firewall.
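
For what it's worth, the SNMP restriction on the stack is roughly along these lines; the community string, ACL number, and NMS addresses below are placeholders, not the real values:

! Sketch of limiting SNMP polling to the monitoring servers only
! (community string, ACL number, and IP addresses are placeholders)
access-list 10 remark NMS-ONLY
access-list 10 permit 192.168.10.50
access-list 10 permit 192.168.10.51
access-list 10 deny   any
!
snmp-server community MyROstring RO 10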

The switch interfaces that attach to the firewall are "access" ports with a VLAN configured, but I don't have a VLAN configured on the attached physical firewall ports. Could that be part of the problem?
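
For reference, the switch side of those firewall links looks roughly like the sketch below; the interface and VLAN numbers are illustrative placeholders, not the exact ones from the drawing. My understanding is that the ASA side is just a routed interface with an IP address in the same subnet, so there is nothing VLAN-related to configure there, but I could be wrong.

! Switch port facing the ASA (interface and VLAN numbers are placeholders)
interface GigabitEthernet1/0/48
 description Uplink to ASA5520 G0/1
 switchport mode access
 switchport access vlan 10
!
! ASA side as I understand it - routed interface, no VLAN tagging
! (IP address shown is a placeholder in the 192.168.10.0/24 range)
interface GigabitEthernet0/1
 nameif inside
 security-level 100
 ip address 192.168.10.1 255.255.255.0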

8 REPLIES
Hall of Fame Super Gold


What IOS version are you using?

New Member


Both switches (all switches) are running 12.2(55)SE5.

And...

BOOTLDR: C2960S Boot Loader (C2960S-HBOOT-M) Version 12.2(55r)SE, RELEASE SOFTWARE (fc1)

I've started looking at the ASA5520s. Their interfaces, G0/1, are switchports but are not on VLAN13, which is the 192.168.10.0/24 network (the drawing incorrectly shows /2... my bad). That must mean that all traffic to/from the ASA(s) is using VLAN 1, and that might be the problem.
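
To check that theory, I figure I can verify which VLAN the ASA-facing port is actually in and where the ASA's MAC address is being learned; something like this should do it (gi1/0/48 is a placeholder for whichever port goes to the ASA):

2960-48-DR-SW1#show interfaces gi1/0/48 switchport | include Access Mode VLAN
! Shows the access VLAN the port is really assigned to
2960-48-DR-SW1#show mac address-table interface gi1/0/48
! Shows the VLAN in which the ASA's MAC address is being learned
2960-48-DR-SW1#show spanning-tree vlan 1
! Confirms whether VLAN 1 is actually active on that link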

Anyone? Feel free to bust my chops if I should know better.

 

Hall of Fame Super Gold


Upgrade the switch to 12.2(55)SE9 and see if it's better.
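
If it helps, the usual way to push a new image to every member of a 2960S stack is the archive command; a rough sketch (the TFTP server address and exact tar filename below are placeholders) would be:

! Sketch only - server address and tar filename are placeholders
Switch#archive download-sw /overwrite /reload tftp://192.168.10.100/c2960s-universalk9-tar.122-55.SE9.tar

The /overwrite and /reload options replace the existing image on each stack member and reload when the copy finishes, so plan it for a maintenance window.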

Silver


Similar issue here, but without a firewall, and it's only a single switch, no stack.

 

Here is some information:

Switch Ports Model              SW Version            SW Image                 
------ ----- -----              ----------            ----------               
*    1 52    WS-C2960S-48FPS-L  15.0(2)SE8            C2960S-UNIVERSALK9-M     


Configuration register is 0xF

2948P-1008-1#sh proc cpu hist
                                                                  
                                                                  
                                                                  
      666666666666666666666666666666666666666666666667777766666666
      664444455555888886666644444777776666655555555554444444444555
  100                                                           
   90                                                           
   80                                                           
   70 **     ***************     *************************     *
   60 **********************************************************
   50 **********************************************************
   40 **********************************************************
   30 **********************************************************
   20 **********************************************************
   10 **********************************************************
     0....5....1....1....2....2....3....3....4....4....5....5....6
               0    5    0    5    0    5    0    5    0    5    0
               CPU% per second (last 60 seconds)

                                                                  
                                                                  
                                                                  
      799999999999999999999999999999999999999999999999999999999999
      443456554434355535555553344344349945445444454474444444434355
  100     ****     *** ******         ** *  *    *  *           
   90  #########################################################
   80  #########################################################
   70 ##########################################################
   60 ##########################################################
   50 ##########################################################
   40 ##########################################################
   30 ##########################################################
   20 ##########################################################
   10 ##########################################################
     0....5....1....1....2....2....3....3....4....4....5....5....6
               0    5    0    5    0    5    0    5    0    5    0
               CPU% per minute (last 60 minutes)
              * = maximum CPU%   # = average CPU%

                                                                              
                                                                              
                                                                              
      999999999999999999999999996669999999999999999999999999999999999999999999
      897788899886667899978997861867987888988898878987977879887666766668977667
  100 **************************   *****************************************
   90 #########################*   *########################################
   80 #########################*   *########################################
   70 #########################* **#########################################
   60 ##########################***#########################################
   50 ##########################***#########################################
   40 ##########################***#########################################
   30 ######################################################################
   20 ######################################################################
   10 ######################################################################
     0....5....1....1....2....2....3....3....4....4....5....5....6....6....7..
               0    5    0    5    0    5    0    5    0    5    0    5    0  
                   CPU% per hour (last 72 hours)
                  * = maximum CPU%   # = average CPU%


2948P-1008-1#sh proc cpu | ex 0.00
CPU utilization for five seconds: 67%/31%; one minute: 69%; five minutes: 81%
 PID Runtime(ms)     Invoked      uSecs   5Sec   1Min   5Min TTY Process 
  44    10754775    34722018        309  0.09%  0.05%  0.05%   0 Net Background   
  80    43859413   958721564         45  0.09%  0.07%  0.04%   0 Draught link sta 
  85    16468915   326452772         50  0.09%  0.10%  0.09%   0 RedEarth Tx Mana 
 104    82508415   723901409        113  0.09%  0.10%  0.08%   0 HLFM address lea 
 106    36392098   735204818         49  0.09%  0.05%  0.04%   0 HLFM address ret 
 126   174048026    28906661       6021  0.69%  0.74%  0.74%   0 hpm counter proc 
 136        1669         246       6784  0.09%  0.10%  0.19%   1 SSH Process      
 157  3082136199   720599650       4277 27.88% 27.92% 27.88%   0 Hulc LED Process 
 168    62958649     5763920      10922  0.19%  0.21%  0.20%   0 HQM Stack Proces 
 169     8940471    11527820        775  0.09%  0.05%  0.05%   0 HRPC qos request 
 188    20516575    20012835       1025  0.39%  0.07%  0.06%   0 CDP Protocol     
 214   244636879   875879413        279  1.49%  1.13%  1.25%   0 Spanning Tree    
 230    18323527    28806901        636  0.29%  0.26%  0.29%   0 PI MATM Aging Pr 

Any ideas?

 

VIP Purple


Hi
You're hitting the HULC bug too. You need to go to an image where the HULC bug is not active; the release notes for each version list the known caveats, or try Cisco's recommended release.

 PID Runtime(ms)     Invoked      uSecs   5Sec   1Min   5Min TTY Process 
 157  3082136199   720599650       4277 27.88% 27.92% 27.88%   0 Hulc LED Process 
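
A quick way to keep an eye on that one process before and after changing code, in case it is useful (the include patterns are just examples):

2948P-1008-1#show processes cpu sorted 5sec | include five seconds|Hulc
! Prints the overall CPU line plus only the Hulc LED Process row
2948P-1008-1#show version | include IOS Software
! Confirms which release the box is actually running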
Silver


The HULC is high, but that should be normal for this 48-port switch. What I can't figure out, though, is where the other ~70% of the CPU is being used.
Also, the HULC issue should be solved in nearly all 15.0(2) releases, which is what I'm already running.
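
Just to sanity-check my own numbers against that output: in the "67%/31%" line the second figure is CPU time spent at interrupt level, so for that 5-second sample the rough breakdown (give or take sampling jitter) seems to be:

  31.0%  interrupt level (traffic handled in interrupt context)
  27.9%  Hulc LED Process
  ~3.6%  all other listed processes (Spanning Tree 1.49%, hpm counter 0.69%, ...)
  ------
  ~62.5% accounted for, versus 67% total for that sample

So a good chunk of the "missing" CPU looks like interrupt time rather than any single process.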
VIP Purple


If you take away the HULC, it's running at a normal CPU, below 40%. That HULC bug exists in most versions for the 2900 and 3000 series switches. Have you added up all of the remaining processes? What's the total? It should be about 40% or so. You could also run the show tech through the CLI Analyzer on the Cisco website to make sure there's no underlying hardware issue. A standard 2960 should generally run between 10-50% CPU. You can get the HULC usage to drop by shutting any unused ports.
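
To shut the unused ports in one go, something along these lines works; adjust the range to whatever is actually unused on your switch (the 25 - 48 range below is just an example):

2948P-1008-1(config)#interface range gigabitEthernet 1/0/25 - 48
2948P-1008-1(config-if-range)#shutdown
! Re-check "show processes cpu sorted" afterwards to see if Hulc LED drops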

Hulc LED Process uses 6-23% CPU on Catalyst 2960/2960S/2960X switch

CSCtg86211



Description


Symptom:
Hulc LED Process uses 6-23% CPU on Catalyst 2960/2960S/2960X 24 or 48-port switch.

Conditions:
The CPU utilization for Hulc LED Process will be in the 6-23% range for the
Catalyst 2960 series like 2960, 2960S, 2960X switch models.

For the WS-C2960X-48LPS-L, which is a 48-port PoE 2960X switch, the Hulc LED Process
could reach about 22%-23%.

This is seen in 12.2(50)SE03 or later releases.

Workaround:
This is an expected behavior and there is no workaround.
Silver


I'm now using SNMP to graph the CPU load and, surprisingly, it's at around 40% instead of the 80-90% shown in the CLI.

