IDS performance with new rules

When I create new rules in the IDS, by how much is performance affected? Are there performance problems when creating new rules?

5 Replies

marcabal
Cisco Employee

There is no simple answer to this question.

It really depends on the signature(s) being added.

Most signatures will have no noticeable effect on performance.

But some signatures (such as signatures that analyze web server return traffic) can have a very serious effect on sensor performance.

I would recommend running your sensor with the default set of signatures for a few days with the 993 signature enabled. The 993 signature will let you know when you start dropping packets because of performance issues. If no 993s are firing, then add a signature or two and let it run for a few days. If no 993s are firing, then the sensor is still performing fine for your network and you can add more signatures. If the 993 signatures start firing, then it is possible that the signatures you added are causing performance issues.
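
If you want to automate the "watch for 993s" step, here is a minimal Python sketch. It assumes you have exported the sensor's alarms to a plain-text log where each event carries a "sigId=<n>" token; that log format is an assumption for illustration, not the sensor's native output, so adapt the parsing to whatever export you actually use.

    # Sketch only: count signature 993 (packet drop) alarms in an exported log.
    # The "sigId=<n>" token format is hypothetical, not the sensor's own output.
    import re
    import sys

    SIG_PACKET_DROP = 993  # per the thread, 993 fires when packets are dropped

    def count_drop_alarms(log_path):
        token = re.compile(r"sigId=(\d+)")
        drops = 0
        with open(log_path) as log:
            for line in log:
                match = token.search(line)
                if match and int(match.group(1)) == SIG_PACKET_DROP:
                    drops += 1
        return drops

    if __name__ == "__main__":
        drops = count_drop_alarms(sys.argv[1])
        if drops == 0:
            print("No 993 alarms: it should be safe to enable another signature or two.")
        else:
            print("%d packet-drop alarms: consider backing out the last signatures added." % drops)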

The signatures that Cisco releases should have no or minimal effect on your sensor performance.

Occasionally Cisco will release signatures with severity default of 0 or change the default severity of existing signatures to 0 in our signature updates.

This is usually because we've determined that the signatures are only usable for a very small set of our customers and, when turned on, could have a noticeable effect on performance.

Why do signatures that analyze web server return traffic have such a serious effect on sensor performance? Analyzing this traffic is sometimes the only way to find the real problems when it comes to vulnerability probes.

When monitoring web traffic you have the web client request traffic and then the web server response traffic. The vast majority of our web signatures monitor the web client request traffic. We have only a few sigs that monitor the web server response traffic. Those few sigs are turned off by default since they affect sensor performance, and are only indicative of links in a web page that could theoretically be used to attack the server. If the links are used then we have other signatures that detect the client request traffic for them.

The performance hit is seen for signatures looking for regular expressions in the web server response traffic. The reason for the performance hit is the amount of data coming back from the web server. Web requests are typically only a small percentage of web traffic; it is the web server response traffic that makes up the majority of it. Web server responses are usually huge files including jpegs, avis, mpgs, etc. If you have a web signature looking for a regular expression in the response, then it has to search through all of the data being returned by the web server.
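
To see why response-side matching is so much more expensive, here is a rough back-of-the-envelope sketch in Python: the same pattern searched over a short request line versus a multi-megabyte response body. The pattern and payload sizes are made up for illustration; the point is only that regular-expression search cost scales with the number of bytes scanned.

    # Illustration only: regex search cost grows with the amount of data scanned.
    import re
    import time

    signature = re.compile(rb"cmd\.exe")            # example pattern, not a real sig
    request = b"GET /index.html HTTP/1.0\r\n\r\n"   # a short client request
    response = b"\x00" * 5_000_000                  # ~5 MB of returned image/video data

    for name, payload in (("request", request), ("response", response)):
        start = time.perf_counter()
        signature.search(payload)
        elapsed = (time.perf_counter() - start) * 1000
        print("%s: scanned %d bytes in %.2f ms" % (name, len(payload), elapsed))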

Our sensor has a specific engine for analyzing web requests, but the standard STRING.TCP engine has to be used for analyzing web responses. The STRING.TCP engine is not aware of the specific formats of web responses, so the code has not been optimized for analyzing them.

We do not have a specific engine for analyzing web responses. Attacks are generally detectable from the web request traffic, which we do have a specific engine for.
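
The architectural difference being described can be sketched roughly as follows (a simplified illustration in Python, not the actual engine code): a protocol-aware check can decode the request and scan only the URI, while a generic string check has no knowledge of the payload format and must sweep the whole stream.

    import re

    def check_request_uri(request, pattern):
        # Web-engine style: parse the request line and scan only the URI field.
        request_line = request.split(b"\r\n", 1)[0]   # e.g. b"GET /a.html HTTP/1.0"
        parts = request_line.split(b" ")
        uri = parts[1] if len(parts) >= 2 else b""
        return bool(pattern.search(uri))

    def check_whole_stream(stream, pattern):
        # STRING.TCP style: no knowledge of the format, so every byte is scanned.
        return bool(pattern.search(stream))

    traversal = re.compile(rb"\.\./")                 # hypothetical example pattern
    print(check_request_uri(b"GET /../etc/passwd HTTP/1.0\r\nHost: x\r\n\r\n", traversal))
    print(check_whole_stream(b"\x00" * 1_000_000 + b"../secret", traversal))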

In addition to Marco's comments above, I would add that the performance hit of adding a new rule (in 3.X) depends entirely on the engine in which you add the signature. If you were to add 25 signatures to the web analysis engine, for instance, and the traffic to be analyzed is destined for that service, there will be no noticeable change in performance. If you were to implement the same 25 signatures but chose to implement them in the more generic TCP.STRING engine, there could be a noticeable change in the sensor's performance. The TCP.STRING engine does not utilize our most advanced regular expression engine at the moment, and therefore signatures analyzed in this engine can cause a linear degradation in performance.
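
A rough way to picture that linear degradation, in generic Python terms rather than the actual engine implementation: if each string signature triggers its own pass over the stream, the work grows in direct proportion to the number of signatures, whereas an engine that folds the signatures into one combined expression makes roughly a single pass however many you add. The patterns below are made up for illustration.

    import re

    # Hypothetical patterns, for illustration only.
    patterns = [rb"cmd\.exe", rb"/etc/passwd", rb"\.\./", rb"root\.exe"]
    stream = b"\x00" * 1_000_000   # stand-in for a large TCP stream

    # Naive per-signature matching: one full pass over the stream per pattern,
    # so cost grows linearly with the number of signatures.
    naive_hits = [p for p in patterns if re.search(p, stream)]

    # Combined matching: compile all patterns into a single alternation once,
    # so the stream is swept in roughly one pass regardless of signature count.
    combined = re.compile(b"|".join(b"(?:" + p + b")" for p in patterns))
    combined_hit = combined.search(stream)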

We are smoothing these differences out in the 4.0 code so that all of our engines that perform regular expression matching will be using the more advanced performance enhanced engine.

Hmm, is there a way to tell the engine to check only the first 500 bytes? Most of the time the rest is not interesting.
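
In generic terms (leaving aside what the Cisco engines themselves support), what is being asked for here is a cap on inspection depth. A minimal sketch of the idea, with a hypothetical pattern:

    import re

    MAX_DEPTH = 500   # only inspect the first 500 bytes of the payload

    signature = re.compile(rb"cmd\.exe")   # hypothetical example pattern

    def match_with_depth_limit(payload):
        # Truncate before searching so the cost is bounded no matter how
        # large the server response is.
        return bool(signature.search(payload[:MAX_DEPTH]))

    print(match_with_depth_limit(b"cmd.exe" + b"\x00" * 5_000_000))   # found in the prefix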
