I have a Cisco 6513 with one IDSM-2 module.
I faced the following issue:
Suddenly, traffic over the network became extremely slow. While troubleshooting, I found that the IPS1 load had reached 100%, so via the CLI I turned bypass mode ON on IPS1. Traffic then returned to normal and the whole network started working fine again. This issue has occurred many times (we never faced it before the E3 signature update).
After some time I turned bypass mode back OFF, and the network is still working normally.
I want to identify the cause of this IPS load behavior.
All signatures are in default state.
IPS1 version Detail:
Cisco Intrusion Prevention System, Version 6.1(2)E3
Signature Update S440.0
Cisco6513 IOS: Version 12.2(18)SXF17
For this kind of troubleshooting I use MRTG to graph load: CPU utilization, plus bits per second and packets per second on the interfaces.
Your issue may be caused by a large number of small packets carrying little data payload, e.g. DNS requests.
If you have time, you can check for that.
CPU load is important, but on dual-processor systems the CPUs will typically see-saw, with one CPU running at 100% and no operational problems. Watching packet loss is a better metric for sensor performance. Loss usually happens when both CPUs hit 100% under heavy interface usage, but seldom with only one CPU maxed out.
You can script up a "show event stat past 1:00 | inc missed" to check your sensor's packet loss.
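As a sketch of what such a script could check: pull the `show event statistics` output from the sensor (here stubbed with an illustrative sample line; the exact output format and the 1% threshold are assumptions, not taken from the sensor docs), extract the missed-packet percentage, and alarm on it.

```shell
#!/bin/sh
# Sketch: parse the missed-packet percentage out of saved
# "show event statistics past 1:00 | include missed" output.
# The sample text below is illustrative, not real sensor output.
sample='missed packet percentage = 3'

# Grab the first number on the matching line.
pct=$(printf '%s\n' "$sample" | grep -i 'missed' | grep -oE '[0-9]+' | head -1)

# Alarm if the sensor is missing more than 1% of packets (arbitrary threshold).
if [ "$pct" -gt 1 ]; then
  echo "WARN: sensor missing ${pct}% of packets"
else
  echo "OK: ${pct}% missed"
fi
```

In practice you would feed this the real CLI output (e.g. captured over SSH on a cron schedule) instead of the hard-coded sample.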
Yes, I know 100% CPU utilization is normal, but our sensor load hit 100% and at the same time traffic across the network was delayed to more than 2000 ms.
This has happened 3 or 4 times since upgrading the signature engine to E3.
We have 4 IPS sensors, and all of them hit this issue simultaneously.
I think it would be helpful to do the following:
1. Analyze events for a time period when CPU usage and traffic delay are normal.
2. Analyze events for an equivalent time period when you have the CPU and traffic-delay problem.
Then compare the difference.
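One way to sketch that comparison: export the event logs from each period and tally the top signature IDs in each. Everything below is hypothetical (the `sigId=` lines stand in for fields you would extract from the real event export); the point is just to surface which signatures dominate during the problem window.

```shell
#!/bin/sh
# Sketch: compare top signature IDs between two event exports.
# The inline data is fake; substitute lines extracted from real logs.
normal='sigId=2004
sigId=3030
sigId=2004'
problem='sigId=5837
sigId=5837
sigId=5837
sigId=2004'

# Print the three most frequent signature IDs in a log.
top() { printf '%s\n' "$1" | sort | uniq -c | sort -rn | head -3; }

echo "--- normal period ---";  top "$normal"
echo "--- problem period ---"; top "$problem"
```

A signature that tops the problem-period list but barely appears in the normal period is a good candidate for the one driving the load.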
When a sensor becomes overloaded it begins to miss inspecting packets. This breaks TCP state tracking and makes the sensor work even harder, holding onto unnecessary TCP sessions because it no longer has accurate state information.
This is called the death spiral.
I see IDSMs begin to drop packets at around 300 to 350 Mb/s of traffic in promiscuous mode.
Our IDSM is in Inline Mode:
Inline TCP Tracking Mode: Interface and VLAN
Core Switch IPS Etherchannel Setup:
Group 5: IDSM(A) and IDSM(B) Port x/7
Group 6: IDSM(A) and IDSM(B) Port x/8
Some VLAN pairs are on interface x/7 and others are on x/8.
There is an FWSM module also, which acts as the default gateway for all internal VLANs.
IDSM shows 'Duplicate Packets'
show statistics virtual-sensor | inc Dup
Duplicate Packets = 24357728
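An absolute counter value on its own says little; what matters is how fast it is climbing. A minimal sketch, assuming you take two readings of the `Duplicate Packets` counter some fixed interval apart (the second reading below is an invented sample value):

```shell
#!/bin/sh
# Sketch: turn two readings of the Duplicate Packets counter into a rate.
reading1=24357728   # from `show statistics virtual-sensor | inc Dup`
reading2=24421728   # same command, taken 60 seconds later (sample value)
interval=60

# Duplicates per second over the sample interval.
rate=$(( (reading2 - reading1) / interval ))
echo "Duplicate packets/sec: $rate"
```

A steadily climbing duplicate rate would be consistent with the same traffic being delivered to the sensor twice, e.g. through the EtherChannel/VLAN-pair layout described above.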
I have found the same 100% load issue in a Cisco TAC case.
Has anyone faced this issue before in their environment? We are not sure about the root cause of ours. Is our issue also related to SMTP traffic, as in the TAC case? How can we identify it?
Please see the following TAC case:
IDSM is showing high CPU and a "processing load percentage" of 100 during certain periods daily. Traffic is affected at those times.
Issue has been identified to be linked to smtp traffic.
The OIDs for SNMP GETs to monitor CPU load are Cisco proprietary.
The IDSM-2 supports the MIBs listed on the Cisco site.
I use MRTG for CPU load monitoring with these OIDs.
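For illustration, a poll against CISCO-PROCESS-MIB might look like the following. The OID shown is `cpmCPUTotal5minRev` from that MIB; whether the IDSM-2 exposes this particular object, and the community string `public` and instance index `.1`, are assumptions you would need to verify against your sensor.

```shell
#!/bin/sh
# A real poll would be something like (net-snmp, assumed community/index):
#   snmpget -v2c -c public SENSOR_IP 1.3.6.1.4.1.9.9.109.1.1.1.1.8.1
# Here we parse a sample reply line of the kind snmpget prints:
reply='SNMPv2-SMI::enterprises.9.9.109.1.1.1.1.8.1 = Gauge32: 97'

# Take the value after the last ": " separator.
cpu=$(printf '%s\n' "$reply" | awk -F': ' '{print $NF}')
echo "CPU 5min load: ${cpu}%"
```

MRTG can then be pointed at the same OID directly in its `Target` line, so a one-off `snmpget` like this is mainly useful for confirming the OID responds before building the graph.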