I have an ACE20 module in a 6500 configured in routed mode, similar to the configuration described here: http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_Routed_Mode_on_the_Cisco_Application_Control_Engine_Configuration_Example
There is a context that has the following VLANs assigned:
Vlan 100 - management interface: 10.0.100.1/24
Vlan 101 - client side interface: 10.0.101.1/24
Vlan 102 - server side interface: 10.0.102.1/24
I have a VIP configured (10.0.101.10) and two rservers (10.0.102.11 and 10.0.102.12) in a single server farm.
Servers have 10.0.102.1 as their default gateway.
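For reference, the routed-mode context described above would look roughly like this on the ACE (object names such as RS-1, SF-WEB, VIP-HTTP and LB-POLICY are illustrative, and it is assumed here that the VIP listens on TCP/80):

```
interface vlan 101
  description client side
  ip address 10.0.101.1 255.255.255.0
  no shutdown
interface vlan 102
  description server side
  ip address 10.0.102.1 255.255.255.0
  no shutdown

rserver host RS-1
  ip address 10.0.102.11
  inservice
rserver host RS-2
  ip address 10.0.102.12
  inservice

serverfarm host SF-WEB
  rserver RS-1
    inservice
  rserver RS-2
    inservice

class-map match-all VIP-HTTP
  2 match virtual-address 10.0.101.10 tcp eq www

policy-map type loadbalance first-match LB-POLICY
  class class-default
    serverfarm SF-WEB

policy-map multi-match CLIENT-VIPS
  class VIP-HTTP
    loadbalance vip inservice
    loadbalance policy LB-POLICY

interface vlan 101
  service-policy input CLIENT-VIPS
```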
Load balancing works fine, client connection reaches the rservers via VIP.
My problem is that I cannot connect directly to the real servers behind the ACE (10.0.102.11 and 10.0.102.12), without load balancing involved.
The ACE just blocks all the traffic not destined to the VIP, although interfaces vlan 101 and vlan 102 have "permit ip any any" input ACLs.
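For completeness, the input ACLs mentioned here would be configured on the ACE along these lines (the ACL name ALL is illustrative):

```
access-list ALL line 10 extended permit ip any any

interface vlan 101
  access-group input ALL
interface vlan 102
  access-group input ALL
```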
What else do I need to configure to make this work?
It worked when interfaces 101 and 102 were in a BVI (bridged mode), but I need to have multiple contexts with shared VLANs.
Answers are inline:
Correct me if I am wrong: the ACE separates each context's traffic, right?
In other words, I won't be able to route this traffic (10.0.102.0/24) via the VLAN 101 (client-side) interface in the Admin context.
Every server has to have its default gateway set to the VLAN 102 (server-side) interface of its corresponding context, right?
--> Right, now I understand your design.
The idea behind that design was to control services' consumption of ACE resources (via resource classes for different contexts) without implementing VLAN groups for every context (which is required in bridged mode).
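As a sketch of that idea, contexts and their resource limits are defined in the ACE Admin context roughly like this (the resource-class and context names are illustrative):

```
resource-class GOLD
  limit-resource all minimum 10.00 maximum equal-to-min

context Context-A
  allocate-interface vlan 101
  allocate-interface vlan 102
  member GOLD
```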
My recommendation would be to use /32 static routes on the 6K instead of a /24 route. For example, if 10.0.102.2 is reachable through Context-A and 10.0.102.3 through Context-B:
ip route 10.0.102.2 255.255.255.255 10.0.101.1
ip route 10.0.102.3 255.255.255.255 10.0.101.2
where 10.0.101.1 is Context-A's VLAN 101 IP address and 10.0.101.2 is the VLAN 101 IP on Context-B.
Hope that helps,
The servers can have only one default gateway, so all of the servers' traffic will go through a single context. I am not sure how you are planning to have one context per service; please explain your deployment plan.
The best-practice design is to use a management VLAN to access the servers: add another interface to the servers and let the admins reach them through the 6K, since there is no need to pass through the ACE and consume its resources for management access.
Do you mean allowing the clients to access the real servers' IP addresses through multiple contexts? If yes, please explain why you need this; I cannot think of any situation where clients need to reach the real IP addresses over multiple contexts.
The MSFC will allow these two VLANs to go over the port-channel trunk toward the ACE, but it will not redirect all the traffic on these VLANs to the ACE; the 6K will still perform its normal L2 (ARP) and L3 (routing) lookups before sending the traffic.
With the current configuration, the client requests will be sent to the servers directly by the 6K; they will definitely not go through the ACE even though you added a static route, since the server subnet is directly connected and has a lower cost. The server replies, however, will be sent to the ACE, and the ACE will drop them due to its IP normalization feature, because it never saw the client side of the connection.
If you need this communication to work through the ACE, do the following:
1- Remove the VLAN 102 interface (SVI) from the 6K configuration.
2- Add ip route 10.0.102.0 255.255.255.0 10.0.101.1 on the 6K.
3- Keep the 10.0.102.1 IP address as the servers' default gateway.
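Putting the three steps together, and assuming the VLAN 102 SVI on the MSFC is named interface Vlan102, the relevant configuration would look roughly like this:

```
! On the 6K/MSFC: remove the SVI and route the server
! subnet via the context's client-side address instead
no interface Vlan102
ip route 10.0.102.0 255.255.255.0 10.0.101.1

! On the ACE context: VLAN 102 keeps 10.0.102.1, which
! remains the servers' default gateway
interface vlan 102
  ip address 10.0.102.1 255.255.255.0
  no shutdown
```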
This way the communication should work smoothly, and the capture tool should show you both legs of the communication.
Let me know if that helps, and mark this one as the correct answer.
What is the client default gateway and where is it configured?