I have a stack of multiple (3+) Catalyst 3750 switches and two ESX servers, each with multiple (2+) uplinks, and I am experimenting with VMotion (live migration of virtual machines to another ESX host). Both ESX servers are connected to the same stack via cross-stack EtherChannel: say ESX1 to ports G1/0/1 and G2/0/1 (Po1), and ESX2 to ports G1/0/2 and G2/0/2 (Po2). Switch #2 is the stack master.

With this layout, when I migrate a VM from ESX1 to ESX2, downtime is minimal (under one second). If, however, ESX2 (the destination host) is connected to an EtherChannel on physical ports G1/0/2 and G3/0/2, so that none of the bundle's member ports are on the stack master switch, the virtual machine's downtime is huge: its MAC address, as shown by "show mac address-table", stays on Po1 until MAC address-table aging occurs or until the virtual machine itself sends traffic to the outside world. When the EtherChannel to the destination host (ESX2 in my example) does include a port on the stack master, the stack learns the new location of the MAC address very quickly.

VMotion is just an example. I haven't tested it, but I assume the result would be the same with an ordinary client roaming from one place in the network to another, as long as it ultimately enters the switch stack via an EtherChannel interface. The issue is unchanged even if the EtherChannel contains only a single port (on one of the stack members); with a plain physical (non-EtherChannel) port, however, everything converges fast.
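For reference, here is a minimal sketch of the configuration I'm describing for the "slow" case. The interface numbers match my example; the channel-group mode (static "on" rather than LACP), the VLAN number, and the MAC address are just placeholders for illustration:

```
! EtherChannel to ESX2 with no member port on the stack master (switch 2)
interface GigabitEthernet1/0/2
 channel-group 2 mode on
!
interface GigabitEthernet3/0/2
 channel-group 2 mode on
!
interface Port-channel2
 switchport mode access
 switchport access vlan 10
!
! After the migration, check where the stack thinks the VM's MAC lives:
! show mac address-table address aaaa.bbbb.cccc
```

In the failing case, the show command above keeps reporting the VM's MAC on Po1 long after the VM has moved behind Po2.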
Is this by design? Does anyone know (a link to documentation would suffice) how cross-stack EtherChannel behaves differently depending on whether the bundle includes or excludes a port on the stack master switch, especially with regard to clients roaming around the network?