
High Interrupt CPU load on 7204 (NPE-G2) with only 96kpps

gugus
Level 1

Short Explanation:

I have 94% CPU interrupt utilization with only 96 kpps.

Do I need to replace my routers with a more powerful model?

Long Explanation:

My test bench is composed of two routers (c7200p-advsecurityk9-mz.124-15.T4.bin) connected together with ATM PA-A6-OC3-MM port adapters.

I'm using a SmartBits tester connected by three FastEthernet links to each router. I send a total of 96 kpps (frame size: 312 bytes) over the three FastEthernet interfaces connected to the first router, and receive the traffic on the other three FastEthernet interfaces connected to the second router.
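(In other words, the topology is: SmartBits <-> 3 x FastEthernet <-> Router 1 <-> ATM OC3 <-> Router 2 <-> 3 x FastEthernet <-> SmartBits.)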

With a light configuration (only one static route and interface IP addressing), I get this result:

Result for 96 kpps (half in one direction, half in the other):

Total CPU Utilization: 41%

Process Utilization: 1%

Interrupt Utilization: 40%

That's already 40% interrupt utilization.
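(For anyone reproducing this: the process/interrupt split comes from the first line of show processes cpu, which looks roughly like the line below; the figure after the slash is the time spent at interrupt level, i.e. in the fast-switching/CEF path.)

Router# show processes cpu | include CPU utilization
CPU utilization for five seconds: 41%/40%; one minute: 41%; five minutes: 40%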

Now I use my complete configuration: service-policy input on two FastEthernet interfaces, service-policy output on the ATM interface, OSPF for routing between the routers, and all interfaces CEF switched with the policies applied in the fast path.
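To give an idea of the shape of it, here is a stripped-down sketch of that kind of policy (class/policy names, interface numbers and the PVC are placeholders, not my exact configuration):

class-map match-all CM-CS6
 match ip dscp cs6
!
policy-map PM-IN
 class CM-CS6
  set ip dscp af32
!
! note: bandwidth percent 100 may first require raising
! max-reserved-bandwidth on the interface
policy-map PM-OUT
 class CM-CS6
  bandwidth percent 100
!
interface FastEthernet1/0
 service-policy input PM-IN
!
interface ATM2/0
 pvc 1/100
  service-policy output PM-OUT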

Result for 96 kpps:

Total CPU Utilization: 94%

Process Utilization: 0%

Interrupt Utilization: 94%

=> Good: all packets are CEF switched (the service policies are handled in the fast path) and no packets are dropped
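For reference, this is how I check that (standard IOS show commands; the interface name is a placeholder):

Router# show processes cpu | include CPU
! the figure after the slash in "94%/94%" is the interrupt-level part
Router# show interfaces stats
! CEF-switched packets show up under "Route cache"; the "Processor" row
! should stay near zero when everything is fast switched
Router# show policy-map interface ATM2/0
! per-class counters confirm the policy sees the traffic and reports drops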

But I'm using a 7204 with an NPE-G2, which theoretically supports 2 Mpps…

How can it be that I already reach 94% interrupt utilization with only 96 kpps?

Thanks


3 Replies

rene.avi
Level 1

Hi Gugus,

From what I've learned on the c-nsp mailing list, the CPU stats are not linear with the load. This matches my results here.

Measured with bidirectional packet flows:

100 kpps = 49%

200 kpps = 60%

400 kpps = 78%

600 kpps = 97%
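A back-of-the-envelope fit of those numbers: they sit close to CPU% = 39 + 0.096 x kpps (e.g. 39 + 0.096 x 400 = 77.8, vs. 78 measured). So it behaves like a fixed ~39% baseline plus a constant per-packet cost, which is why the percent-per-kpps ratio looks so nonlinear at low rates.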

Cheers, /Rene

Accepted Solution

I guess you have a lot of service policies working on your router. Remember, source-based routing is one of the major CPU/memory hogs. Try to eliminate some of the policies with traditional traffic engineering and post the results.
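For the A/B test, something along these lines (interface and policy names are placeholders):

Router# show policy-map interface FastEthernet1/0 input
! check whether the policy actually matches traffic before touching it
Router# configure terminal
Router(config)# interface FastEthernet1/0
Router(config-if)# no service-policy input PM-IN
! then compare CPU load and forwarding rate with and without the policy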

I've been using the SmartBits tester for further performance tests on the NPE-G2, trying to find out exactly what causes this low performance.

Here are the results of the first test, on a single Cisco 7204 using its two onboard Gigabit Ethernet ports.

The SmartBits sends a bidirectional flow of 2.8 Mpps (64-byte UDP packets) across the router.

With the default configuration, the router can forward 1.72 Mpps... (CPU at 99%)

Now I test the service-policy impact by sending a unidirectional flow (64-byte UDP packets, DSCP 0).

- Light service-policy output (match dscp cs6 and bandwidth percent 100 on the matched traffic); the generated traffic doesn't match: forwarding rate is 940 kpps.

- My customized service-policy output; the generated traffic doesn't match: forwarding rate is 413 kpps.

- Light service-policy input (extended ACL matching dscp cs6; the policy remarks DSCP to af32); the generated traffic doesn't match: forwarding rate is 750 kpps.

- Light service-policy input (standard ACL matching a nonexistent host; the policy remarks DSCP to af32; see the sketch after this list); the generated traffic doesn't match: forwarding rate is 788 kpps.

- Light service-policy input (standard ACL matching any; the policy remarks DSCP to af32); the generated traffic matches: forwarding rate is 740 kpps.

- My light service-policy input; the generated traffic doesn't match: forwarding rate is 780 kpps.

- My light service-policy input AND my customized output: forwarding rate is 325 kpps.
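For concreteness, the "standard ACL matching a nonexistent host" variant above looks roughly like this (ACL number, address and names are placeholders):

access-list 10 permit 192.0.2.1
!
class-map match-all CM-ACL
 match access-group 10
!
policy-map PM-ACL-IN
 class CM-ACL
  set ip dscp af32
!
interface GigabitEthernet0/1
 service-policy input PM-ACL-IN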

Impact of using a service policy:

=> Router performance is roughly halved by a single light service policy... Enabling a service policy has a big impact!

With my customized configuration, performance drops by a factor of about 5 (from 1.72 Mpps down to 325 kpps).

This means that with my configuration I can handle about 160 kpps in production (to keep the CPU from exceeding 70%).

...But 160 kpps means about 400 Mb/s of Ethernet traffic (340-byte packets), and since I only want to drive an OC3 ATM interface, that should be OK.

I continue my tests, now using two 7204 routers connected by ATM OC3 (155 Mb/s ATM).

Performance drops to 96 kpps (about 280 Mb/s of bidirectional ATM traffic)... Using an ATM interface costs the NPE-G2 a lot of resources.
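(Sanity check on that figure, my own arithmetic assuming aal5snap encapsulation: a ~300-byte packet takes 7 ATM cells, i.e. 7 x 53 = 371 bytes on the wire, so 48 kpps per direction x 371 bytes x 8 bits is about 142 Mb/s, or roughly 285 Mb/s bidirectional. That is consistent with the ~280 Mb/s above.)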

To get usable throughput in the field with my ATM and QoS configuration, I have to hold traffic to 70 kpps (CPU at 70%).

If I remove my input and output service policies, I can run at 101 kpps (about 297 Mb/s of bidirectional ATM traffic); that is the maximum for this link, with the CPU at 70%.
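(Same arithmetic as above, assuming 7 cells per packet: 101 kpps bidirectional is 50.5 kpps per direction, i.e. about 50,500 x 7 = 353,500 cells/s, essentially the OC3 cell rate of ~353,208 cells/s. So at that point the link itself is full, not just the CPU.)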

These tests led me to conclude that the NPE-G2 is still too light to run a "normal" QoS service policy on an ATM OC3 interface.
