CIFS/SMB performance through FWSM

Unanswered Question
Mar 29th, 2010


I'm currently experiencing performance degradation when forwarding CIFS/SMB traffic through the FWSM. Basically, I'm running a multi-customer setup where each customer has a dedicated DMZ for their own server resources (Citrix, app servers, etc.). In addition to that, I have a NetApp FAS3170 storage device located on the inside network.

My server team has conducted a few tests, using SIO as the load-generation tool and trying various block sizes. When forwarding SMB traffic between a server and the NetApp, bypassing the firewall, we get the following results:

Blocksize     Forwarding rate (Mbps)
---------     ----------------------
512B          20,157
4K            88,567
64K           104,203

When performing the exact same load test, only this time forwarding the traffic through the FWSM, we get the following results:

Blocksize     Forwarding rate (Mbps)
---------     ----------------------
512B          11,427
4K            28,799
64K           36,204

The numbers above are averages over a series of tests.
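Put side by side, the FWSM path costs roughly 43-67% of the throughput, depending on block size. A quick sketch of the comparison (assuming the comma in the figures above is a decimal separator, i.e. 20,157 means 20.157 Mbps):

```python
# Figures from the two test runs above; the comma in the original
# tables is read here as a decimal separator (20,157 -> 20.157 Mbps).
bypass   = {"512B": 20.157, "4K": 88.567, "64K": 104.203}
via_fwsm = {"512B": 11.427, "4K": 28.799, "64K": 36.204}

for bs in bypass:
    drop = 100.0 * (1 - via_fwsm[bs] / bypass[bs])
    print(f"{bs:>4}: {via_fwsm[bs]:7.3f} vs {bypass[bs]:7.3f} Mbps "
          f"({drop:.1f}% slower through the FWSM)")
```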

So my question is: are there any known performance issues with SMB/CIFS traffic through the FWSM? We've ruled out the usual shortcomings of SMB (e.g. high latency across large distances), since everything is within the same datacenter/infrastructure. I had a similar problem with NFS traffic a few months ago, which resulted in a redesign where we bypassed the firewall for NFS traffic. However, that is not an option here, as it would compromise the security of our multi-customer design.

Like any other firewall platform, the FWSM has its limitations, but we don't appear to be hitting them here. When we're seeing slow SMB performance, there's nothing to suggest that the FWSM is overloaded.

I know the information provided above isn't much, but I'm somewhat in the dark here, so if anyone has past experience with a similar problem, or simply some good suggestions, I'm all ears.



jan.nielsen Sun, 04/18/2010 - 14:38

Do you see a lot of out-of-order packets if you do a traffic sniff?

UHansen1976 Mon, 04/19/2010 - 03:06

Hi Jan,

Thanks for writing.

We did, however, find the source of the problem. A trace revealed that the client requests from the Citrix servers to the SMB server were experiencing a lot of timeouts, something called pseudo-deadlocks. The trace clearly showed the client waiting for the SMB server to reply.

So for now we've cleared the firewall, since nothing indicates a problem there.

As to your question: yes, we do see some out-of-order packets, but from what I know this is a known issue when the FWSM load-balances the packets of a given flow across its two network processors.
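If it helps anyone quantify the reordering in a capture: a minimal sketch of the usual heuristic, counting segments that arrive with a sequence number below the highest one already seen on the flow. The sequence numbers below are made-up illustration data, not from our trace (in practice you'd pull them from a capture; Wireshark's tcp.analysis.out_of_order flag applies a similar idea):

```python
def count_out_of_order(seq_numbers):
    """Count TCP segments arriving with a sequence number below the
    highest one already seen on the flow (a simple out-of-order heuristic)."""
    highest = -1
    ooo = 0
    for seq in seq_numbers:
        if seq < highest:
            ooo += 1
        highest = max(highest, seq)
    return ooo

# Made-up example: the segment at seq 3000 arrives after 4460 was seen.
print(count_out_of_order([1000, 2460, 4460, 3000, 5920]))  # -> 1
```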



A team I was working with had a similar issue a while back. The "fix" was to modify the server buffer size.

Make a registry alteration on the file server, as detailed in the following technote.

On the file server we set the registry key to its maximum (65535); in side-by-side tests we saw a considerable difference. You'll need to weigh the risks within your environment before doing this, though.
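The post doesn't name the key, but on Windows file servers of that era the SMB server request buffer size is controlled by the SizReqBuf value under LanmanServer\Parameters (maximum 65535). Assuming that is the key the technote refers to, the change would look like this; the Server service has to be restarted for it to take effect:

```shell
:: Run on the file server from an elevated prompt.
:: SizReqBuf is assumed here to be the value the technote refers to.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" ^
    /v SizReqBuf /t REG_DWORD /d 65535 /f

:: Restart the Server service so the new buffer size takes effect.
net stop server /y && net start server
```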

Best Regards


