We are trying to put a new application into production, but a strange thing is happening in our lab.
We have an ACE20 in an FT (fault-tolerant) configuration in one-arm mode, with several contexts and several serverfarms in each context.
In one of our contexts we have configured two serverfarms, A and B, that are exactly the same: 4 real servers each, 1 out of service and 3 in service. The real servers are the same in both serverfarms, and we have configured the same sticky group, class-map policies, and access list. The only difference is the VIP that clients use to reach the application.
The strange thing is that the ACE load-balances serverfarm A perfectly, but in serverfarm B, 99% of the connections go to a single real server and the remaining 1% are distributed between the other 2 in-service real servers.
1) Probes: we verified them and they are not failing. The real servers are permanently OPERATIONAL.
2) Sticky to the same server: how would that be configured? My understanding was that sticky ip-netmask 255.255.255.255 makes each distinct client IP go to a (potentially) different server, not send everyone to a single one. We are not sure what else we should check to be certain nothing is misconfigured.
3) We are almost certain the connections come from different client IP addresses, but that is something we are verifying right now.
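For reference, a minimal sketch of the sticky setup being described, under the assumption of a per-client-IP sticky group (the names STICKY-B and SFARM-B are placeholders, not our actual configuration):

```
! Sticky group keyed on the full client source IP:
! each distinct client IP gets its own sticky entry
sticky ip-netmask 255.255.255.255 address source STICKY-B
  serverfarm SFARM-B

! Commands we are using to check the distribution and the sticky entries
show serverfarm SFARM-B detail
show sticky database
```

With a /32 sticky netmask, connections should only concentrate on one real server if most clients actually share a source IP (for example, behind a NAT or proxy), which is why point 3 matters.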