We have a Cisco 7206VXR NPE-G2 at the network border. While routing 500 Mbit/s of outgoing plus 500 Mbit/s of incoming ISP traffic, CPU load reaches 82%.
We're not using ACLs, QoS, or other features that could potentially slow down the router.
How can I solve this problem?
Statistics are in the attachment below.
Hi, you are routing almost 1 Gbit/s of traffic; that CPU utilization is normal for this volume. If you don't need router-only features, it would be better to use an L3 switch, such as a 3750, for this task.
But a Cisco 7301 loads to only about 37% with this traffic. We decided to use the 7206 because it has more PA slots and an NPE that routes 2 Mpps, versus about 1 Mpps for the 7301 (routerperformance.pdf by Cisco).
...Also I'm running two BGP full views on the router, so I do need "router" features.
I agree. I ran performance tests on an NPE-G1 and got similar results. In my tests on a 7206VXR with NPE-G1, CPU usage was high during packet switching between the two GE ports on the NPE itself. I tested both IP packet switching and MPLS packet switching, with no great difference between them.
The NPE-G2 is more powerful than the NPE-G1, though.
What packet size are you using?
With 64-byte frames, and accounting for the preamble and interframe gap, you need roughly 1.49 Mpps to fill a gigabit Ethernet link.
In addition, if your traffic is bidirectional it counts twice: port A to port B and port B to port A.
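As a quick sanity check of that packets-per-second figure, here is a minimal sketch (the frame overheads are the standard Ethernet constants: 8 bytes of preamble/SFD and 12 bytes of interframe gap):

```python
# Back-of-the-envelope max frame rate for a given Ethernet frame size.
LINE_RATE_BPS = 1_000_000_000  # gigabit Ethernet
PREAMBLE_SFD = 8               # bytes: preamble + start-of-frame delimiter
IFG = 12                       # bytes: minimum interframe gap

def max_pps(frame_bytes: int, line_rate_bps: int = LINE_RATE_BPS) -> float:
    """Maximum frames per second on the wire for a given frame size."""
    bits_per_frame = (frame_bytes + PREAMBLE_SFD + IFG) * 8
    return line_rate_bps / bits_per_frame

print(f"64-byte frames:   {max_pps(64) / 1e6:.2f} Mpps")   # ~1.49 Mpps
print(f"1500-byte frames: {max_pps(1500) / 1e6:.3f} Mpps")
```

This is why small-packet traffic is so much harder on a software router: at 1500-byte frames a full gigabit is only about 82 kpps, but at 64 bytes it is roughly 18x more packets for the CPU to switch.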
Does anybody know why the "Router performance" document from Cisco has no info about process-switched packet rates for the 7200 NPE-G2 (while the plain 7200 is listed)?
Avoid using anything plugged into the chassis WIC-style slots other than the NPE's own ports. I noted that traffic on the NPE interfaces is handled far more efficiently, while the other cards plugged into the front of the 7206VXR chassis rely on software processing and eat up your CPU.
Moral of the story: just because you put a "gig" card into a chassis doesn't mean it can route a full gigabit without running out of CPU. Try moving everything to the NPE interfaces if possible and see how much your CPU load drops.
About 200 + 150 Mbit/s is moving through the outside NPE GE interface.
Unfortunately, I can't move the rest of this traffic to the NPE-based interfaces.
What's the reason for process switching instead of hardware switching when using non-NPE-based interfaces? Are there official documents on this?
Output of "sh int switching" and "sh cef not-cef-switched" is in the attachment.
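For anyone reading later, these are the IOS show commands typically used to find out how much traffic is landing on the process-switched path (the interface name below is just an example):

```
! Per-interface breakdown of process- vs fast/CEF-switched packets
show interfaces switching
! Packets CEF received but could not switch (punted to the process path)
show cef not-cef-switched
! Confirm CEF is enabled globally and on the interface
show ip cef summary
show ip interface GigabitEthernet0/1 | include CEF
! See which processes are consuming CPU ("IP Input" = process switching)
show processes cpu sorted
```

If "IP Input" is high in `show processes cpu sorted`, a large share of packets is being process-switched rather than CEF-switched, which would explain the CPU load.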