I'm currently experiencing performance degradation when forwarding CIFS/SMB traffic through the FWSM. Basically, I'm running a multi-customer setup, where each customer has a dedicated DMZ for their own server resources (Citrix, app servers, etc.). In addition to that, I have a NetApp FAS3170 storage device located on the inside network.
My server team has conducted a few tests, using SIO as the performance load tool and using various block sizes. When forwarding SMB traffic between a server and the NetApp, bypassing the firewall, we get the following results:
Blocksize    Forwarding rate (Mbps)
When performing the exact same load test, only this time forwarding the traffic through the FWSM, we get the following results:
Blocksize    Forwarding rate (Mbps)
The numbers above are averages based on a series of tests.
So my question is: are there any known performance issues concerning SMB/CIFS traffic through the FWSM? We've ruled out the usual shortcomings of SMB, e.g. poor performance across large distances, since this is all within the same datacenter/infrastructure. I had a similar problem with NFS traffic a few months ago, which resulted in a redesign where we bypassed the firewall for NFS traffic. However, that is not an option here, as it would compromise the security of our multi-customer design.
Like any other firewall platform, the FWSM has its limitations, but that does not seem to be the case here. When we're seeing slow SMB performance, there's nothing to suggest that the FWSM is overloaded.
I know the information provided above isn't much, but I'm slightly in the dark here, so if anyone has past experience with a similar problem, or merely some good suggestions, I'm all ears.
We did, however, find the source of the problem. A trace revealed that the client requests from the Citrix servers to the SMB server experienced a lot of timeouts, something called pseudo-deadlocks. The trace clearly showed that the client was waiting for the SMB server to reply.
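For anyone wanting to reproduce this kind of diagnosis: a packet capture on the server-side segment, filtered to the SMB ports, is enough to see the client sitting idle waiting for the server's reply. A minimal sketch, assuming a Linux capture host; the interface name `eth0` and the server address `10.0.0.10` are placeholders for your own environment:

```
# Capture full SMB frames (ports 445 and 139) to/from the file server
tcpdump -i eth0 -s 0 -w smb-trace.pcap 'host 10.0.0.10 and (tcp port 445 or tcp port 139)'
```

Opening the resulting pcap in Wireshark, the display filter `smb && tcp.analysis.retransmission` quickly shows whether the stalls are retransmissions/timeouts on the client side or the server simply taking a long time to answer.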
So for now, we've acquitted the firewall, since nothing indicates a problem there.
As to your question: yes, we do see some out-of-order packets, but from what I know, this is a known issue when the FWSM load-balances the packets in a given flow between its two processors.
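If the out-of-order packets themselves turn out to matter, the FWSM does have a knob for this: the completion unit, which re-orders packets within a flow before they leave the module. A sketch of the relevant configuration, assuming FWSM 3.2 or later (check your release notes before enabling it, as it costs some throughput):

```
! Enable the completion unit so packets in a flow
! leave the FWSM in the order they arrived
sysopt np completion-unit
```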
A team I was working with had a similar issue a while back. The "fix" was to modify the server buffer size.
Make a registry alteration on the file server, as detailed in the following technote.
On the file server we set the registry key to its maximum (65535), and in side-by-side tests we saw a considerable difference. I guess you need to consider the risks within your environment prior to doing this, though.
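Since the technote itself isn't quoted above, here is a sketch of what such a change typically looks like. I'm assuming the key in question is the LanmanServer `SizReqBuf` value (the SMB server request buffer size, whose documented maximum is 65535, matching the number mentioned); 0xFFFF below is 65535 in hex. A reboot of the Server service is needed for it to take effect, and as noted, weigh the memory cost per connection before applying this on a busy file server:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
"SizReqBuf"=dword:0000ffff
```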