I am having problems with HTTPS load-balanced connections in one-arm mode. When we test directly to the servers in the serverfarm, connections are four times faster than connections through the VIP to the same servers. I have configured ICMP probes directly to the servers, and an HTTPS probe in the serverfarm itself. I'm using the 'least bandwidth' predictor. The config is attached below. I have compared this config to another client site utilising the same design and it looks fine. Also, when we perform the same test at the other client site, we have no problems: connections direct to the servers and through the LB are the same speed.
The first thing to check is whether the ACE is actually the device responsible for the delay. With this in mind, I would recommend doing a traffic capture on the ACE VLAN while running a test connection. From that capture, you should be able to see where the delay is coming from.
If you need help analyzing it, you can always open a TAC service request.
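As a rough sketch, the on-box capture could look like the following. The VLAN number, ACL name, and buffer size here are placeholders, and exact syntax may vary by ACE software version, so treat this as an illustration rather than a verified config:

```
! Hypothetical example: capture HTTPS traffic on the one-arm VLAN
access-list CAP-ACL line 10 extended permit tcp any any eq 443

! Capture matching packets into a circular buffer on that VLAN interface
capture DELAY-CAP interface vlan 100 access-list CAP-ACL circular-buffer

! Run the slow test connection through the VIP, then inspect the buffer
show capture DELAY-CAP detail
```

Comparing timestamps between the client-side and server-side packets in the capture should show whether the extra latency is introduced by the ACE or elsewhere in the path.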
Topology & Design:
Two ACI fabrics
Stretching VLANs using OTV
Both fabrics are advertising BD subnets into same routing domain
Some BDs (i.e., VLANs) are stretched, but some are not.
Endpoints can move betwee...
VMware Trunk Port Group is supported from ACI version 2.1
VMM integration must be configured properly
ASA device package must be uploaded to APIC
ASAv version must be compatible with ACI and device package version
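One way to check which device packages are registered on the APIC is via its REST API, querying the managed-object class for device packages. The hostname and credentials below are placeholders, and this is a sketch of the general approach rather than a verified procedure:

```
# Hypothetical check: authenticate to the APIC, then list registered
# L4-L7 device packages (vnsMDev class). Host and credentials are placeholders.
curl -k -X POST 'https://apic.example.com/api/aaaLogin.json' \
  -d '{"aaaUser":{"attributes":{"name":"admin","pwd":"password"}}}' \
  -c cookie.txt

curl -k -b cookie.txt \
  'https://apic.example.com/api/node/class/vnsMDev.json'
```

The returned objects include the package vendor, model, and version, which can be compared against the ASAv version to confirm compatibility before deploying the service graph.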
Topology & Design:
Traffic flow within same fabric
Endpoint moves to Fabric-2
Bounce Entry Times Out
Traffic Black-holed
Summary
Solution
Appendix:
In the previous articles of the ACI Automation series, we used Postman/Newman a...