I am using the ACE module in a 6500 for server load balancing, configured in routed mode.
I have configured a VIP which works well from both the client and the server side. Whenever a client sends traffic to the VIP, the ACE load-balances it to two servers in round-robin fashion. Also, when a server sends traffic to the VIP, it gets a reply from the other server.
When traffic is sent from the client side to the VIP, the response comes back with the VIP as the source address.
When traffic is sent from the server side to the VIP, the response comes back with the real IP of the other server.
I need the following result:
From the server side, whenever a server pings the VIP, it should get the reply with the VIP as the source address.
I think this is an issue because there are two service policies, each with a valid class map that matches ICMP packets, and both service policies are applied to the same interface. You could test by removing the ICMP protocol line from the mgmt class map and seeing what results you get. I suspect the ACE is processing the ICMP packet against the mgmt class map first.
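As a quick test, something like the following should do it (MGMT-CM and the line number 3 are placeholders here; substitute the actual name of your management class map and the line number of its ICMP match entry, which you can see in the running config):

class-map type management match-any MGMT-CM
    no 3 match protocol icmp any

If the ping behavior changes, that confirms the mgmt class map is catching the ICMP traffic first; you can then re-add the line and adjust the policy order instead.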
The problem is the NAT, or rather the lack of it; that is why you are seeing an asymmetric flow, in this case a Layer 2 asymmetric flow. I will assume a couple of things, so please correct me if I am wrong, but most likely this will solve the problem:
- The servers are using the ACE as their default gateway. Since you do not have NAT applied and the external clients are working, there is no L3 device in between.
- The switches between the ACE and the backend servers have the L2 information (MAC addresses) of the servers, so without NAT the servers can reach each other directly at Layer 2.
Now, back to the ACE. To correct this, I will assume that the rservers you have configured are the ones opening connections to the VIP. In that case we need to NAT the traffic through the outgoing interface; in other words, the natpool needs to be configured on the interface the ACE uses to send traffic to the servers. The configuration should look like this:
policy-map multi-match L4_LB_VIP_PMAP
    loadbalance vip inservice
    loadbalance policy L7_VIP_PE_PMAP
    nat dynamic 1 vlan 130    <--- HERE
interface vlan 130
    description Server Side
    ip address 10.1.3.6 255.255.255.0
    alias 10.1.3.252 255.255.255.0
    peer ip address 10.1.3.5 255.255.255.0
    access-group input PE
    access-group output PE
    service-policy input PE-SERVER-PMAP
    natpool 1 10.1.3.X 10.1.3.X netmask 255.255.255.0 pat    <--- here
Replace X with an available IP in the servers' subnet. Note the order: you need to configure the natpool on the interface first, and only then apply it with "nat dynamic" under the multi-match policy's class.
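Putting the sequence together (the class name L4-VIP-CLASS and the pool address 10.1.3.50 below are placeholders for illustration only; use your actual VIP class name and a free address from the server subnet):

interface vlan 130
    natpool 1 10.1.3.50 10.1.3.50 netmask 255.255.255.0 pat
policy-map multi-match L4_LB_VIP_PMAP
    class L4-VIP-CLASS
        nat dynamic 1 vlan 130

With a single address and "pat", the ACE will PAT all server-to-VIP connections behind that one pool address, so replies always come back through the ACE.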
Let me know; it should work. Since you are not doing NAT, what happens is that the backend server learns the MAC address of the other server that opened the connection. When it replies, the destination MAC is that server's MAC, so as soon as the packet hits the L2 switch, the switch forwards it out the port where that MAC address was learned. The packet therefore ends up at the client server directly, bypassing the ACE, and the client server drops it because it does not match the connection it opened toward the VIP.
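Once the NAT is in place, you can verify it is being hit from the ACE CLI (the policy name here matches your configuration; adjust if yours differs):

show service-policy L4_LB_VIP_PMAP detail
show conn

In the "show conn" output, the server-to-VIP flows should now show the natpool address rather than the real server IP as the source, which confirms the replies are being pulled back through the ACE.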
- VMware Trunk Port Group is supported from ACI version 2.1
- VMM integration must be configured properly
- The ASA device package must be uploaded to APIC
- The ASAv version must be compatible with the ACI and device package versions