I have a question about the relationship between stickiness based on IP address/netmask and the load-balancing predictor on an ACE module running ACE software 2.1.0.
Let's say I load-balance a web server with stickiness based on source IP address and a /24 netmask. The first time a client with address 1.2.3.4 makes a request, the predictor chooses a real server (call it server A) and an entry is added to the sticky table: "src address 1.2.3.0/24 ==> server A". When the same client comes back with further requests before the sticky timeout expires, they will be directed to real server A.
If another client with IP address 1.2.3.5 makes a request (again before the sticky timeout expires), my understanding is that, since there is already an entry in the sticky table for 1.2.3.0/24, the request will be directed to real server A and the predictor will not be consulted at all. Is this right?
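To make the lookup concrete, here is a minimal Python sketch of how a sticky table keyed on a masked source address would behave. This is an illustration of the masking logic only, not ACE's actual implementation; the example client addresses 1.2.3.4 and 1.2.3.5 are assumed to match the 1.2.3.0/24 sticky entry discussed above.

```python
import ipaddress

def sticky_key(client_ip: str, prefix_len: int) -> str:
    """Return the sticky-table key: the client IP masked to the given prefix length."""
    net = ipaddress.ip_network(f"{client_ip}/{prefix_len}", strict=False)
    return str(net)

# With a /24 mask, both clients produce the same key, so the second
# client hits the existing entry and the predictor is never consulted.
print(sticky_key("1.2.3.4", 24))  # 1.2.3.0/24
print(sticky_key("1.2.3.5", 24))  # 1.2.3.0/24

# With /32 stickiness, each client gets its own entry, so the predictor
# runs once per client instead of once per subnet.
print(sticky_key("1.2.3.4", 32))  # 1.2.3.4/32
print(sticky_key("1.2.3.5", 32))  # 1.2.3.5/32
```

This also shows why a /24 mask produces fewer sticky entries than /32: many clients collapse onto one key.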
If that's the case, I guess the only reason to use a netmask shorter than /32 would be to save space in the sticky table?
In real-world deployments, what netmasks do people use for stickiness based on IP address?
The mask in the "sticky ip-netmask 255.255.255.0" command means that if a client 10.0.0.10 is stuck to server01, all subsequent clients from the 10.0.0.0/24 subnet will be stuck to the same server.
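For reference, a minimal configuration sketch of the sticky group discussed above (the group name WEB-STICKY, serverfarm name WEB-FARM, and timeout value are hypothetical placeholders, not taken from the thread):

```
sticky ip-netmask 255.255.255.0 address source WEB-STICKY
  serverfarm WEB-FARM
  timeout 30

policy-map type loadbalance first-match WEB-POLICY
  class class-default
    sticky-serverfarm WEB-STICKY
```

Changing the mask to 255.255.255.255 would give per-client (/32) stickiness instead of per-subnet stickiness.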
This also explains why most of the clients end up on server01: perhaps most of them are on the same subnet?
In my personal experience, most sticky use cases work better with /32 stickiness. Stickiness is often used to resolve session issues: if a client's session starts on a server, it should keep working against that server, because web servers often don't share session information among themselves. /24 stickiness can be useful when a /24 subnet is the NAT pool of a proxy. In that case, potentially thousands of clients could arrive at your VIP from any of the 254 usable addresses of the subnet, and to be sure those clients' sessions land on the same web server you would apply stickiness to the whole /24 subnet.
Prerequisites for integrating an ASAv service graph into the ACI fabric:
- VMware Trunk Port Group is supported from ACI version 2.1
- VMM integration must be configured properly
- The ASA device package must be uploaded to the APIC
- The ASAv version must be compatible with the ACI and device package versions