We're attempting to configure a Core Cluster for the WAE-612 and got confused when entering the access information for the CIFS file server that this core cluster will export. The doc says we can only enter access info for 1 file server, but I was under the impression we could include more than 1 file server in the core cluster. Can someone please clarify this? Thanks.
That is correct, you can enter access info for only 1 file server, since that info is for local access.
You can enter a domain user/password for multiple file servers.
Thanks for your response... So, are we supposed to create 1 core cluster for every file server? I.e., if we were to export 20 file servers, would we have to create 20 core clusters?
You only need one core cluster per data center; a core cluster will cover CIFS servers within roughly 25 ms of the core WAEs. If you are doing CIFS server autodiscovery (standard since early 4.0.7), then you don't have to export the file servers manually.
Also, you only need a user name/password if you are prepositioning files from a server or set of servers.
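To sanity-check autodiscovery, you can list the file servers the WAEs have discovered from the CLI (a sketch using a command mentioned later in this thread; the exact output format varies by 4.0.x version):

```
! Show the most recently auto-discovered CIFS file servers (4.0.7+)
WAE# show cifs auto last
```

If a server shows up here, you shouldn't need to export it manually, and you only need credentials for it if you preposition.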
Hope that helps,
Hi, Dan... What happens if we do not create the cluster and export the server but instead just let a remote user connect to that server? I am just trying to conceptually understand what's going on here.
In WAAS 4.0.x, to get L7 acceleration for the CIFS protocol, you need an Edge, a Core, and a connectivity directive between the two. If you don't have a WAE with core services running, but there is a WAAS WAE intercepting close to the server and one close to the edge, you will get L4 optimization (TFO, LZ and DRE). This will optimize the WAN link at the TCP level, but you won't get the CIFS application optimizations.
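A quick way to see which layer is doing the work (a sketch using commands that appear elsewhere in this thread; output varies by version): L4 savings show up in the TFO stats, while L7 CIFS handling shows up in the edge's CIFS session list.

```
! L4 (TFO/LZ/DRE) savings -- present with or without a Core
WAE# show statistics tfo saving

! L7 CIFS acceleration -- sessions appear here only when an
! Edge/Core pair with a connectivity directive is handling them
EDGE-WAE# show cifs session list
```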
Hope that helps.
Thanks for your "right on the money" reply. We're only optimizing the CIFS protocol (opening Word docs, saving Word docs, etc.). I pasted below the current stats for the TFO savings. Is there a similar command to determine if L7 acceleration is taking place and what the savings are? Thanks again.
AEGFD512A#show statistics tfo saving
Application           Inbound       Outbound
Bytes Savings         3902420854    3342936373
Packets Savings       0             0
Compression Ratio     4.7:1         4.0:1
Bytes Savings         125275083     25187762
Packets Savings       0             0
Compression Ratio     2.3:1         1.3:
On the Edge WAE, you will see the L7 savings in those statistics. For more in-depth stats on CIFS, go to the edge device home page in the CM GUI, then click on the GUI button for the device.
You can then go into the Monitoring sections and see real time statistics as you copy over data, etc.
Hope that helps,
Would it be possible to make an edge site both an "edge" and a "core", so that both upload and download CIFS traffic is cached? What would be the trade-off?
You can do both on the same device; however, you need at least 2 GB of RAM on the device to enable both the core and the edge. Also, cores cannot be enabled on NME-WAE-5xx models. You also need connectivity directives in both directions, so it will get more complex as you add more sites.
Another option would be to wait for 4.1.x and go with transparent CIFS. It will eliminate the need for Core/Edge/Connectivity Directives.
Hope that helps,
Dan, you have been incredibly helpful and your answers are right on the money, so I appreciate it very much. We're configuring a large deployment of WAEs (about 40 WAE-512s on the edges and a couple of 7326s at the core), so we're under the gun and, unfortunately, folks with your WAAS expertise are few and far between.
We're not getting the benchmark numbers we were hoping for (3 to 7 times higher than local), even on the download. SMB is not an issue and I am pretty sure the CIFS traffic is getting accelerated, so I am at my wit's end as to what else I can tweak to improve the performance. I have tinkered with the WAN setting on the CIFS tunnel that I created. It's a T2 connection between the edge and the core, so I changed it from 1544 kbps to 6312 kbps; however, to my surprise, the performance got degraded. What do you recommend the WAN speed be set to, and is there any other setting I can tweak to improve the performance? Thanks much.
Are the 2xT1s both active or are they active/standby? If they are both active, you should match the bandwidth AND latency to the link speed. If they are active/standby, then you should match a single link.
Also, to see if things are being optimized via CIFS, you should see the session in "sh cifs session list" on the edge, not just as port 139/445 sessions. If it's not listed in "sh cifs session list", then CIFS traffic is not running through the AO. Also, you should see the 4050 tunnel optimization stats in the GUI going up.
Are the cores close (within 25 ms) to your file servers? Did you end up exporting them or did you use CIFS autodiscovery?
Remember to double-check your speed and duplex settings; a mismatch there can really bring down performance.
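Putting those checks together, a rough troubleshooting pass from the CLI might look like this (a sketch; the GigabitEthernet interface numbering is an assumption for your hardware, and `<file-server-ip>` is a placeholder):

```
! 1. Is the CIFS AO actually handling the sessions?
EDGE-WAE# show cifs session list

! 2. Is the core close enough to the file server (~25 ms or less)?
CORE-WAE# ping <file-server-ip>

! 3. Any speed/duplex mismatch dragging performance down?
WAE# show interface GigabitEthernet 1/0
```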
Thanks for the reply.
I will double check this...
<< Also, to see if things are being optimized via CIFS, you should see the session in "sh cifs session list", not on the edge in the sessions as 139/445.>>
Yes, I have run this command on the edge WAE and was seeing sessions on port 139, so I am pretty sure the CIFS traffic is getting accelerated.
Where in the edge GUI should I be seeing the 4050 tunnels? In the WAFS log?
Yes. Here is the ping from the core to the file server: 5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/mdev = 1.165/1.235/1.271/0.036 ms
I have created 2 connectivity directives: one is for a single T1 connection and the other is for a 2xT1 connection. I have exported the file server as part of the first connectivity directive, but not the second one (we're running 4.0.19 code on our WAEs). Exporting the file server did not seem to make much difference for performance.
Dotted the i's and crossed the t's on the settings all the way from the core to the edge (MPLS on the WAN).
What is a realistic expectation for benchmark remote WAAS performance when WCCP is turned up as compared to when it's not? I.e., what should the customer expect in terms of savings? Thanks.
I hate to say it depends, but it really does. If you are copying CIFS files repeatedly, you literally get awesome response times and bandwidth reduction. If your Exchange servers have encryption enabled, you won't see any reduction on that traffic at all until a future release. If, like most normal enterprises, there is lots of similar HTTP traffic and a large amount of CIFS traffic, I have seen anywhere from 30-80+% reduction in TCP traffic. If your links are not very congested, application response times may not improve as dramatically; however, you should be able to fit a ton more data across the links.
However, CIFS acceleration should be "eye-popping" on low-bandwidth, medium-to-high-latency links.
What about UNC connections to PCs that utilize CIFS for file transfers? I have not been able to see an improvement in response for these types of transfers between hosts.
Is there anything that can be done to improve this?
Accessing files via UNC still leverages CIFS, so the opportunity for acceleration is there. I would start by verifying that the sessions are being handled by WAFS. From the WAE running the WAFS Edge service, use the following command to see the list of sessions being handled by WAFS:
show cifs session list
And the following command to see the file server auto-discovery entries:
show cifs auto last
We have used the "show cifs session list" command to verify that the CIFS traffic to/from the edge site is getting accelerated. However, lately we have not been seeing any sessions. Does that mean all the traffic is going to the local WAE?
It means that the CIFS session is not being accelerated by WAFS. Can you verify that the destination file server has been auto-discovered:
sh cifs auto last
Unfortunately, our customer insists on comparing the WAE response to the local one, although he's willing to accept post-optimization remote numbers that do not exceed 2 times the local ones. We're getting good results when compared to the pre-optimization remote numbers, but not the local ones. In addition, he's also set up a high-end PC (Pentium 4, 2.80 GHz) that is used for the local testing (reading files, writing files, opening files, etc.). Wouldn't the type of machine skew the local numbers even more? I.e., the local speed would be directly related to how powerful the testing PC is?