I have a two-part question:
1. I noticed in a log file during the WAAS boot cycle that the software loads many TCP variants (Nevada, High, BIC, etc.). From reading and experience I have seen different vendors deploy different TCP variants depending on their implemented TCP stack. These variants all behave differently depending on the environment in which they are used: some are ideal for networks with high packet loss, high latency, and high bandwidth (LFNs, etc.). Does WAAS use them interchangeably, or select among them dynamically in specific scenarios? Are there specific application optimizers (AOs) that take advantage of particular TCP variants? In addition, how exactly does the TCP adaptive buffering function work?
2. I normally don't change much in a typical WAAS installation beyond custom policies for specific applications. From what I have read, the default TCP parameters are fine for most scenarios. Is there a best-practice approach, or some type of BDP calculation, that engineers should apply during a typical WAAS install to ensure the devices are configured for maximum performance from a TCP standpoint? Cisco has done such a great job engineering the defaults into WAAS that there isn't much for the installer to change unless a specific application or link needs special attention, but I want to make sure I am not missing anything.
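To clarify what I mean by a BDP calculation, here is a rough sketch of the arithmetic I have in mind (my own illustration of the standard bandwidth-delay product formula, not a Cisco-documented procedure; the function name and example figures are hypothetical):

```python
# Bandwidth-delay product (BDP): the amount of data that can be "in
# flight" on a link, and hence a lower bound on the TCP send/receive
# buffer size needed to keep the pipe full.
def bdp_bytes(link_bandwidth_bps: float, rtt_seconds: float) -> int:
    """Return the BDP in bytes for a given link speed and round-trip time."""
    return int(link_bandwidth_bps * rtt_seconds / 8)

# Example: a 10 Mbps WAN link with a 100 ms round-trip time
print(bdp_bytes(10_000_000, 0.100))  # 125000 bytes, i.e. ~122 KB
```

Is this the kind of per-link sizing exercise worth doing at install time, or does the adaptive buffering make it unnecessary?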
Your input is greatly appreciated.
Senior Solutions Architect
CCIE 17082 (R/S)