We have a pair of ACE modules running c6ace-t1k9-mz.A2_1_1a.bin. They are a redundant pair, one in each of two 6513s.
A few weeks ago we had to reboot the active ACE because none of the load-balanced services were working through it. We could still access the CLI via the supervisor and it looked OK, but it was reporting probe failures on the rservers, and we couldn't ping anything in the ARP table even though the ARP entries were correct, even after clearing ARP.
We switched over to the standby ACE, which restored service, reloaded the primary, and then switched back. Everything was fine for a few weeks, but now we have a similar problem. We've switched over to the standby ACE again and everything is working, but we haven't reloaded the other card yet.
We're seeing probe failures to rservers in each of the two contexts running on the affected ACE, but the same rserver probes are passing on the currently active ACE. We can ping some of the rservers from the affected ACE, but not all of them. The ARP and MAC address-table entries look correct.
Before we start looking at sniffer captures, I'd like to know if there's anything we can check from the CLI that might indicate what the problem is.
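As a sketch of where to start from the CLI, the commands below are ones commonly used when troubleshooting probe failures on an ACE module. The slot number and context name are placeholders for your environment, and exact output and option names can vary by software release, so verify against your version's command reference:

6513# session slot 3 processor 0     ! attach to the ACE from the supervisor
ACE/Admin# changeto CONTEXT1          ! repeat the checks in each affected context
ACE/CONTEXT1# show probe detail       ! per-probe pass/fail counts and last failure reason
ACE/CONTEXT1# show rserver detail     ! rserver operational state and which probe marked it down
ACE/CONTEXT1# show serverfarm         ! serverfarm and rserver state summary
ACE/CONTEXT1# show arp                ! compare entries against the working ACE
ACE/CONTEXT1# show resource usage     ! look for exhausted per-context resources
ACE/CONTEXT1# show ft group detail    ! fault-tolerance state of the redundant pair

Comparing "show probe detail" and "show arp" output side by side between the failing and the working ACE is often the quickest way to see whether the failures are probe-specific or a broader forwarding problem on the card.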
Intermittent probe failures may be observed in configurations using ICMP or UDP probes. This behavior can occur when the probe interval is set too low (for example, 5 seconds) and the number of probe instances is greater than 4000.
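If that condition applies, relaxing the probe timing reduces the load from probe instances. A minimal sketch of an ICMP probe with a less aggressive interval (the probe name is illustrative; check the timing values against your design requirements):

probe icmp PING-RSERVERS
  interval 15               ! raised from an aggressive value such as 5 seconds
  faildetect 3              ! consecutive failures before marking the rserver down
  passdetect interval 60    ! how often to re-probe a failed rserver
  passdetect count 3        ! consecutive passes before marking it up again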