I've identified a performance hit when data traverses the CSS via a SAMBA mount. Using the same services, a content rule passing SSH (TCP/22) can achieve 80 Mbps (FE) throughput with scp. From the same devices, if I scp a file from the samba mount point to the localhost, I get 8 Mbps.
A second way to test it is to put the file on a webserver and download it via HTTP (~44 Mbps). Mount that HTTP server's directory as a mount point external to the CSS, then download via HTTP from the local file/mount point, and the rate drops to ~8 Mbps. (BTW, I verified correct SAMBA performance/operation on non-CSS traffic flows, where the 80 Mbps rate is matched.)
Traffic analysis doesn't show anything strange, other than the frames being tagged as 'short frames'. Postings claim this might be an interface/application issue, but I'm wondering whether the SAMBA presentation layer is somehow affecting how the framing is done. Perhaps padding or compression is affecting how the CSS interprets the frame? Has anyone experienced similar performance issues?
The CSS is not interpreting or inspecting SSH traffic; it should normally just switch the traffic in hardware.
Personally, I would like to see sniffer traces of the working and the non-working download. I would also like to see your content rule config and to know your software version.
We are running 7.40.0.04. I can't export the sniffer trace due to company policy, but I could probably recreate it for you if a lab were available ;).
Here's the relevant switch config:
ip address 10.10.10.253
ip address 10.10.10.252
vip address 192.168.0.1
add service samba-server-eth0
add service samba-server-eth1
vip add 192.168.0.1
add service samba-server-eth0
add service samba-server-eth1
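(For context, the snippet above is abbreviated. A typical WebNS service/content-rule block is shaped roughly like the sketch below; the service IP, rule name, and the explicit TCP/445 port match are my assumptions for illustration, reconstructed from memory of the CSS CLI, not our exact config, so verify the syntax against your software version:)

```
service samba-server-eth0
  ip address 10.10.10.1
  active

content samba-rule
  vip address 192.168.0.1
  protocol tcp
  port 445
  add service samba-server-eth0
  active
```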
Let me explain my test scenario a little more clearly. The Samba server sits on the internal side of the CSS, inside a data center. It exports a directory /rawdata that contains a file test.zip (approx. 400 MB).
From an external Linux laptop with a FE connection, I first scp (via SSH tunnel) the file to see what speeds I can achieve (within the limits imposed by the encryption overhead). So:
scp email@example.com:/rawdata/test.file /tmp
and I get about 10.2 MBps (80ish Mbps).
next, I mount the samba exported drive on the laptop:
mount -t smbfs -o username=.,password=. //192.168.0.1/rawdata /mnt/samba
Now I copy the same file from the mount point to the localhost. This moves the file through the CSS, matching the samba-server-samba content rule, but measures the transfer with the same metric and overhead imposed by scp:
scp /mnt/samba/test.file root@localhost:/tmp
and I achieve 1.2 MBps (8ish Mbps).
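As a sanity check on these numbers, the MBps figures scp reports convert to approximate line rate by multiplying by 8 (a rough sketch that ignores TCP/IP framing overhead; the two sample values are just the rates quoted above):

```shell
# Convert scp's MB/s readings to approximate Mbps (x8).
for rate in 10.2 1.2; do
  echo "$rate" | awk '{printf "%.1f MBps ~= %.1f Mbps\n", $1, $1*8}'
done
```

So the two scp runs correspond to roughly 81.6 Mbps and 9.6 Mbps on the wire, consistent with the "80ish" and "8ish" figures.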
Via the traffic dump, I verified the samba transfer is initiated by the laptop, thus hitting the content rule rather than a group match by an unsolicited outbound server flow. One noticeable difference between the SSH and SAMBA transfers is the time delay between packets: while SSH operates in the 0.0000x-second range, SAMBA responds in the 0.000x range. Other than that, the SAMBA transfer appears normal (TCP handshake, followed by file transfer and session information, all on port 445). Looking at the same transfer through a capture on the laptop and a capture on the server, all packets match (sequence and acknowledgement numbers, so no delayed binding or proxying is occurring). The only strange behavior is the designation of the frames as a [short frame] in both captures (one by tcpdump on Linux, the other by a NAM blade in a 6509). Is it possible the CSS is also identifying this 'short frame' and handing the frame up for software processing rather than hardware? The SAMBA config isn't setting buffer lengths or other socket settings, so it should be defaulting to the NIC settings (which are also default).
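A quick way to quantify that inter-packet delay difference is to diff successive capture timestamps. A minimal sketch; the epoch-style timestamps below are made-up sample values, not from my capture, and a real run would feed it a `tcpdump -tt` timestamp column instead:

```shell
# Print the gap between successive packet timestamps (seconds).
printf '%s\n' 1.000010 1.000025 1.000900 1.001800 | \
  awk 'NR > 1 { printf "%.6f\n", $1 - prev } { prev = $1 }'
```

On the sample data this prints gaps of 0.000015, 0.000875, and 0.000900 seconds, i.e. the same 0.0000x-vs-0.000x distinction described above.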
I've also verified the same performance hit on a different CSS (same version), using a different path into the data center to a different samba server. I think today I'll reverse the path (share from the laptop, mount on the samba server) to see what happens. Any suggestions on what to test? I will also try the large file transfer and debug the flow via llama to see if anything is noted there.
I believe the [short frame] tag just means your sniffer tool is not capturing the full packet size (tcpdump, for instance, truncates frames to its default snap length unless you capture with -s 0), so I don't think it is relevant to our issue. All I can suggest is upgrading to a more recent software version.
Regarding the test, make sure to compare exactly the same transfer with and without the CSS.
I shared out the tmp directory on the external laptop as 'temp', containing test.file.
Then, from the internal server I copy that file down:
scp firstname.lastname@example.org:/tmp/test.file /tmp
and get the typical 80 Mbps.
Next, I mount the directory:
mount -t smbfs -o username=.,password=. //188.8.131.52/temp /mnt/csbowse
and copy the file from the mount point to the localhost:
scp /mnt/csbowse/test.file root@localhost:/tmp
and get the typical 8 Mbps.
BUT HERE IS SOMETHING I FORGOT EARLIER. Once you pull down a file at the slow speed, if you re-enter the command to copy it down, you get the desired speed (80 Mbps). However, this seems to be a 'cached' copy from somewhere, because if you let it sit for a while (usually until someone copies another file, or you manually copy another file), you get the slow-speed transfer again. I do not see this slow-then-fast behavior on samba transfers that don't go through the CSS.
We haven't manually configured any caching on the CSS; does it default to any caching activity?
I'm getting tired of withdrawing my statements. Using scp as the transfer mechanism because it gives such nice transfer-rate metrics may also hide some problems. If I reverse the file transfer:
[previous] scp /mnt/samba/test.file root@localhost:/tmp
[reverse] scp /tmp/test.file root@localhost:/mnt/samba
an interesting thing happens. The scp 'runs' and appears to complete in a timely fashion (~80 Mbps), then throws a permission-denied error. If you look at the actual traffic, nothing is transferred. So it would appear there is some magic interaction between the SSH and SAMBA protocols. This got me thinking about my perceived behavior of a 'slow' transfer followed by a 'fast' transfer. It turns out that, if you watch the traffic crossing the interface, the 'fast' transfer isn't actually transferring anything... it is pulling from a local cache and perhaps doing a diff on the file system (because about 10 packets are exchanged).
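One way to confirm whether a 'fast' rerun actually moved data is to diff the NIC byte counters around the copy. A sketch with hard-coded sample counter values; the sysfs path and interface name in the comment are assumptions for a Linux box, so adjust for your platform:

```shell
# Report how many bytes actually crossed the wire between two counter reads.
wire_bytes() {
  echo "$(( $2 - $1 )) bytes on the wire"
}
# Real usage: read /sys/class/net/eth0/statistics/rx_bytes before and after the copy.
wire_bytes 1000000 420000000   # a ~400 MB transfer that really moved data
wire_bytes 1000000 1012000     # a "fast" rerun that only exchanged a few packets
```

A rerun served from local cache shows a delta of only a few kilobytes, nowhere near the file size.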
To verify the transfer is slow EVERY TIME, I did the HTTP file download several times in a row, and each time the rate was similarly slow. For this test, I moved test.file into a web-available directory within the samba mount, then opened the file, choosing Save As... test.file to the desktop:
and achieved about 44 Mbps.
Next, I opened the file from the mount point
and saved it to the desktop
and achieved the now-standard 8 Mbps. I immediately repeated the same process several times, each time getting the same download speed. Based on this testing, please disregard the earlier posting about possible caching issues and faster transfers when issuing the scp command multiple times.
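That consistency check can be scripted: convert each (size, elapsed) measurement to Mbps and eyeball the spread across runs. The size/seconds pairs below are illustrative placeholders for the repeated mount-point downloads, not my actual timings:

```shell
# Each pair is "megabytes seconds" for one download attempt.
for run in "400 400" "400 395" "400 410"; do
  set -- $run
  awk -v mb="$1" -v s="$2" 'BEGIN { printf "%.1f Mbps\n", mb * 8 / s }'
done
```

With these sample timings every run lands around 8 Mbps, matching the behavior observed above.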
Do note that this test was performed on a different external workstation for display-back purposes. Also connected at FE speed, this workstation had an immediate GUI, versus doing a display-back from the laptop, which was unbearably slow.