
Cisco 6509, high CPU utilization

sumitsept3
Level 1

Hi,

We are experiencing very high CPU utilization on our 6509 switch. Is there any way we can fix this issue?

sh proc cpu

CPU utilization for five seconds: 77%; one minute: 75%; five minutes: 60%

PID 5Sec 1Min 5Min Process

1 0.4% 4.4% 4.3% kernel

3 0.0% 0.0% 0.0% qdelogger

4 0.0% 0.0% 0.0% devc-pty

5 0.0% 0.0% 0.0% devc-mistral.proc

6 0.0% 0.0% 0.0% pipe

7 0.0% 0.0% 0.0% dumper.proc

4104 0.0% 0.0% 0.0% pcmcia_driver.proc

4105 0.0% 0.0% 0.0% bflash_driver.proc

12298 0.0% 0.0% 0.0% mqueue

12299 0.0% 0.0% 0.0% flashfs_hes.proc

12300 0.0% 0.0% 0.0% dfs_bootdisk.proc

12301 0.0% 0.0% 0.0% ldcache.proc

12302 0.0% 0.0% 0.0% watchdog.proc

12303 0.0% 0.0% 0.0% syslogd.proc

12304 0.0% 0.0% 0.0% name_svr.proc

12305 0.0% 0.0% 0.0% wdsysmon.proc

12306 0.0% 0.0% 0.0% sysmgr.proc

16386 0.0% 0.0% 0.0% chkptd.proc

16403 0.0% 0.0% 0.0% sysmgr.proc

16404 0.0% 0.0% 0.0% syslog_dev.proc

16405 0.0% 0.0% 0.0% itrace_exec.proc

PID 5Sec 1Min 5Min Process

16406 0.0% 0.0% 0.0% packet.proc

16407 0.0% 0.0% 0.0% installer.proc

16408 45.0% 42.0% 33.8% ios-base

16409 0.0% 0.0% 0.0% fh_fd_oir.proc

16410 0.0% 0.0% 0.0% fh_metric_dir.proc

16411 0.0% 0.0% 0.0% fh_fd_snmp.proc

16412 0.0% 0.0% 0.0% fh_fd_none.proc

16413 0.0% 0.0% 0.0% fh_fd_intf.proc

16414 0.0% 0.0% 0.0% fh_fd_gold.proc

16415 0.0% 0.0% 0.0% fh_fd_timer.proc

16416 0.0% 0.0% 0.0% fh_fd_ioswd.proc

16417 0.0% 0.0% 0.0% fh_fd_counter.proc

16418 0.0% 0.0% 0.0% fh_fd_rf.proc

16419 0.0% 0.0% 0.0% fh_fd_cli.proc

16420 0.0% 0.0% 0.0% fh_server.proc

16421 0.0% 0.0% 0.0% fh_policy_dir.proc

16422 27.1% 25.2% 18.9% tcp.proc

16423 0.0% 0.0% 0.0% ipfs_daemon.proc

16424 0.4% 0.4% 0.4% raw_ip.proc

16425 0.0% 0.0% 0.0% inetd.proc

16426 2.2% 1.9% 1.4% udp.proc

16427 0.2% 0.4% 0.4% iprouting.iosproc

16428 0.4% 0.2% 0.2% cdp2.iosproc

PID 5Sec 1Min 5Min Process

765997 0.0% 0.0% 0.0% tftp_fs.proc

6 Replies

Giuseppe Larosa
Hall of Fame

Hello Sumit,

Could you post the output of the following as an attachment?

sh proc cpu sorted 1min

This shows the processes that are using the most resources.

sh proc cpu history

This shows overall CPU usage over time as histograms, and also helps to understand whether there are recurring peaks in usage or it is steadily high.
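If the output is very long, you can also trim it before attaching, for example with something along these lines (the exact filter syntax may vary by release):

show processes cpu sorted 1min | exclude 0.0

That should hide the idle processes and keep only the ones actually consuming CPU.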

Hope to help

Giuseppe

sh proc cpu

CPU utilization for five seconds: 86%; one minute: 88%; five minutes: 90%

PID 5Sec 1Min 5Min Process

1 0.0% 2.7% 3.8% kernel

3 0.0% 0.0% 0.0% qdelogger

4 0.0% 0.0% 0.0% devc-pty

5 0.0% 0.0% 0.0% devc-mistral.proc

6 0.0% 0.0% 0.0% pipe

7 0.0% 0.0% 0.0% dumper.proc

4104 0.0% 0.0% 0.0% pcmcia_driver.proc

4105 0.0% 0.0% 0.0% bflash_driver.proc

12298 0.0% 0.0% 0.0% mqueue

12299 0.0% 0.0% 0.0% flashfs_hes.proc

12300 0.0% 0.0% 0.0% dfs_bootdisk.proc

12301 0.0% 0.0% 0.0% ldcache.proc

12302 0.0% 0.0% 0.0% watchdog.proc

12303 0.0% 0.0% 0.0% syslogd.proc

12304 0.0% 0.0% 0.0% name_svr.proc

12305 0.0% 0.0% 0.0% wdsysmon.proc

12306 0.0% 0.0% 0.0% sysmgr.proc

16386 0.0% 0.0% 0.0% chkptd.proc

16403 0.0% 0.0% 0.0% sysmgr.proc

16404 0.0% 0.0% 0.0% syslog_dev.proc

16405 0.0% 0.0% 0.0% itrace_exec.proc

PID 5Sec 1Min 5Min Process

16406 0.0% 0.0% 0.0% packet.proc

16407 0.0% 0.0% 0.0% installer.proc

16408 48.1% 50.4% 51.8% ios-base

16409 0.0% 0.0% 0.0% fh_fd_oir.proc

16410 0.0% 0.0% 0.0% fh_metric_dir.proc

16411 0.0% 0.0% 0.0% fh_fd_snmp.proc

16412 0.0% 0.0% 0.0% fh_fd_none.proc

16413 0.0% 0.0% 0.0% fh_fd_intf.proc

16414 0.0% 0.0% 0.0% fh_fd_gold.proc

16415 0.0% 0.0% 0.0% fh_fd_timer.proc

16416 0.0% 0.0% 0.0% fh_fd_ioswd.proc

16417 0.0% 0.0% 0.0% fh_fd_counter.proc

16418 0.0% 0.0% 0.0% fh_fd_rf.proc

16419 0.0% 0.0% 0.0% fh_fd_cli.proc

16420 0.0% 0.0% 0.0% fh_server.proc

16421 0.0% 0.0% 0.0% fh_policy_dir.proc

16422 33.0% 31.0% 29.6% tcp.proc

16423 0.0% 0.0% 0.0% ipfs_daemon.proc

16424 0.2% 0.6% 0.5% raw_ip.proc

16425 0.0% 0.0% 0.0% inetd.proc

16426 2.5% 2.4% 2.5% udp.proc

16427 1.5% 0.5% 0.5% iprouting.iosproc

16428 0.4% 0.3% 0.2% cdp2.iosproc

show proc cpu his

8889899999998888899999988888888889889898889998888898888899

8891800102009859901110099998998885783709980008899919989800

100 *

90 **********************************************************

80 **********************************************************

70 **********************************************************

60 **********************************************************

50 **********************************************************

40 **********************************************************

30 **********************************************************

20 **********************************************************

10 **********************************************************

0....5....1....1....2....2....3....3....4....4....5....5....

0 5 0 5 0 5 0 5 0 5

CPU% per minute (last 60 minutes)

* = maximum CPU% # = average CPU%

Hello Sumit,

Two processes are causing the high CPU usage:

CPU utilization for five seconds: 86%; one minute: 88%; five minutes: 90%

PID 5Sec 1Min 5Min Process

16408 48.1% 50.4% 51.8% ios-base

16422 33.0% 31.0% 29.6% tcp.proc

Please provide the IOS image name and the type of CPU. You may be hitting a bug, the device may be under some form of attack with malformed TCP packets, or there may be some misconfiguration.

Have you added or changed the configuration recently?

Is there a companion device colocated? If so, how does it behave?
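In the meantime, since tcp.proc is so busy, it may also be worth checking which TCP sessions terminate on the switch itself, for example with standard IOS commands such as (output may differ slightly on your release):

sh tcp brief all

sh ip traffic

The first lists the TCP control blocks open towards the switch, the second shows aggregate IP and TCP counters, so you can see whether something is hammering the control plane.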

If you have a registered account with enough privileges, you can try using

https://www.cisco.com/pcgi-bin/Support/OutputInterpreter/home.pl

It will ask you to post:

sh ver

sh proc cpu

This is the output interpreter feedback:

-----------------------------------------------

SHOW PROCESS CPU NOTIFICATIONS (if any)

INFO: Total CPU Utilization is comprised of process and interrupt percentages.

Total CPU Utilization: 86%

Process Utilization: %

Interrupt Utilization: %

These values are found on the first line of the output:

CPU utilization for five seconds: x%/y%; one minute: a%; five minutes: b%

Total CPU Utilization: x%

Process Utilization: (x - y)%

Interrupt Utilization: y%

Process Utilization is the difference between the Total and Interrupt (x and y).

The one and five minute utilizations are exponentially decayed averages (rather than an arithmetic average), therefore recent values have more influence on the calculated average.

ERROR: Total CPU Utilization is at 90% for the past 5 minutes, which is very high (>90%).

NOTE: This is an exponentially decayed average rather than an arithmetic average, therefore recent events have a greater effect than past events.

This can cause the following symptoms:

- Input queue drops

- Slow performance

- Slow response in Telnet or unable to Telnet to the router

- Slow response on the console

- Slow or no response to ping

- Router doesn't send routing updates

The following processes are causing excessive CPU usage:

PID CPU Time Process

16408 50.4% ios-base

16422 31.0% tcp.proc

TRY THIS: If there is no indication of any problem in logged messages, then the problem could possibly be caused by a bug in the IOS. Using the Bug ToolKit, run a search for the specified process to see if any bugs have been reported. If this is not caused by a bug, this device may be overloaded. Investigate upgrading this device or moving some CPU intensive tasks to a second device.

INFO: If you need help from a Customer Support Engineer in the Cisco TAC, capture the 'show tech-support' command output (from enable mode) before contacting Cisco TAC. Also, if the high CPU utilization is caused by a process, please capture the 'show stacks {pid}' command output (where pid is the process ID of the process causing the high CPU utilization). If the problem is caused by a bug in IOS, please relay the bug ID to the Cisco Customer Support Engineer handling the case.

NOTE:

- Make sure all debugging commands in your router are turned off by issuing the undebug all or no debug all command.
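As a purely illustrative example of the x%/y% notation: if the first line read "CPU utilization for five seconds: 86%/20%", the total would be 86%, the interrupt-driven part 20%, and the process-driven part 86 - 20 = 66%. Your output shows no /y% value, which is presumably why the interpreter left the Process and Interrupt fields empty.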

Hope to help

Giuseppe

Hello Sumit,

I did a search on the Bug Toolkit for release 12.2(18)SXF16 with the search key "high cpu":

CSCek78237
High CPU on ATM PA Helper process on PA-A3-T3

CSCdz83100
Multicast pkts should not be policy routed in CEF

CSCsa51770
Configuration of RSPAN on 12.2(18)SXD3 causes high CPU.

CSCsv40527
High CPU in IP RIB when LI enabled

Hope to help

Giuseppe

Thanks for all your support. I am still trying to fix it as soon as possible.

We are using

s72033-advipservicesk9_wan-vz.122-18.SXF6.bin

Hello Sumit,

All the bugs I've listed could apply to your image.

If you like, you can send a filtered version of your configuration.

Be aware that some problems arise from having two devices competing for something, for example both trying to be HSRP active for the same group.

So if you have a pair of C6509s, look at what is happening on the other device.
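For example, something like

sh standby brief

on both chassis should show which one is HSRP active for each group; if both claim to be active for the same group, that is likely your problem.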

If you haven't made any configuration changes on this device or on a neighboring device in the last couple of days, I would suggest opening a TAC service request.

Hope to help

Giuseppe
