I'm testing throughput on an active/passive FWSM deployment prior to putting it into production and I'm getting confusing results. Here they are:
* When I test Gig-to-Gig throughput between two of our distribution blocks (6500s w/o FWSMs) I get around 550Mbps.
* When I test Gig-to-Gig throughput between one of our existing distribution blocks (6500 w/o FWSMs) and the new distribution block (6500 w/ FWSMs) I can't get better than 300Mbps.
I've tried the test to a VLAN on the new distribution block that doesn't go "through" the FWSM and I get 550Mbps. This is, unfortunately, really looking like the FWSM restricts throughput.
Some specifics on the testing:
* I'm using the CLI tool iperf
* I've tried different packet sizes (64k - 512k), all with appreciable differences between disti blocks w/ FWSM and w/o
* There is no production traffic to speak of -- so no contention on the devices
* I've done testing after-hours and during production (for the other, in-use disti blocks)
* All distribution blocks are dual Gig-connected, L3 routed to the core
* The FWSM is configured in routed-mode
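For anyone wanting to reproduce the test, a single-stream run along these lines would look like the following. The window size and duration shown are illustrative, not necessarily the exact values used:

```
# On a host behind the new (FWSM) distribution block:
iperf -s -w 512k

# On a host in a non-FWSM distribution block
# (replace <receiver-ip> with the server's address):
iperf -c <receiver-ip> -w 512k -t 60 -i 10
```

Running the same client/server pair between two non-FWSM blocks gives the baseline number to compare against.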
I can't find any architectural difference between the distribution layers that would account for the gap; it just looks like the FWSM can't push a single connection above 350Mbps.
Thoughts? Am I missing something? Has anyone else done these kinds of tests?
Turns out the issue was due to a known bug (CSCsj56795). Apparently the FWSM can reorder packets that it processes. The FWSM software version we're running has the fix for this bug, but you must issue a specific command ("sysopt np completion-unit") to enable the corrected behavior. No reason was given for why this command isn't part of the default configuration.
After issuing this command and retesting throughput there were no discrepancies between our distribution blocks. Looks like this resolved our problem.
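For anyone hitting the same symptom, the fix described above is a one-line addition to the FWSM configuration (prompt names here are illustrative; the command itself is as given in the bug ID):

```
FWSM# configure terminal
FWSM(config)# sysopt np completion-unit
FWSM(config)# end
FWSM# write memory
```

Re-run the throughput test after applying it; in our case the FWSM block then matched the non-FWSM blocks.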
Thank you very much for this post. I had exactly the same problem with FWSM 4.0.6 (tcp throughput from FWSM to 6500 limited to 300Mbit/s) and the command "sysopt np completion-unit" resolved it.
Dear Mr Cisco, why isn't this command enabled by default?
Per the bug details, this should be fixed in 4.0(1), but apparently you are still having issues with 4.0(6). We are planning to install FWSM with 4.0(11); has anyone had issues with this code as well? Thanks
I'm curious to know if the command 'sysopt np completion-unit' was enabled on the FWSM in a single firewall mode or if there were multiple contexts on the FWSM?
Have you found any issues since running the above command? Are you aware of any bugs related to using this command?
I understand the benefits of using the feature, but I'm concerned if any adverse impacts were caused as a result of applying the feature.
We've never seen any problems related to entering this command, although the customer is not stressing the firewalls throughput-wise. I would check whether this default behavior was updated in the newest release of FWSM code. It was a single instance, and I'm not aware of any effect or requirement for multiple contexts. Thanks,
As already mentioned, there are no adverse effects of using the command in single- or multi-context mode.
I hope that clarifies it a little.
Thanks for the response.
Any idea or reason why this feature is not enabled by default?
If this is a true enhancement I'm curious as to why one should have to manually enable it.
It is not enabled by default because it is a fix for a defect we saw (the FWSM reordering TCP packets). It is not needed by everyone, since only a small percentage of users hit the problem, and the behavior was not in place when earlier versions shipped.
That said, I don't expect enabling it to cause extra issues.
There is a very good document on FWSM performance. It covers the np completion-unit topic, as well as other settings you can tweak to improve single-flow performance, plus other considerations to take into account: