Cisco Support Community

New Member

CSS TCP balancing not working well

Hello, we are load-balancing a service hosted by two servers. Each server has two NICs, and for redundancy each NIC is configured on a separate VLAN (VLAN 3 and VLAN 5), both routed by the CSS. The service is a TCP socket that transfers data.

The services are configured in a group with a VIP address in the VLAN 3 range (192.168.3.x). When we run the services through VLAN 5 (service IPs 192.168.5.x), it works. But if we shut down the NICs in the VLAN 5 range, so that data flows in the same VLAN as the VIP (192.168.3.x), it does not work.

The ACLs are configured identically for both services, so it should work, but I think the issue is the services running in the same IP range/VLAN as the VIP group address. How can I solve this problem? Will rate. Thank you a lot.

Cisco Employee

Re: CSS TCP balancing not working well

First, let me say that in 10 years of experience with the CSS, dual-NIC servers have always been a pain to set up, implement, and troubleshoot for, in the end, very little advantage.

That said, if you really want to know what is failing in your case, you should sniff the traffic to see what's going on.

The fact that the server and the vip are in the same subnet is not a problem.

However, if the client is in the same subnet as the server, that is a real problem, because the server's response will bypass the CSS and go directly to the client.

For this kind of setup you need to configure source IP NATing with a group (check the configuration guide on how to set up a group).

Note that if you configure source NATing, your servers will only see one client IP, the one used for NATing, so you lose statistics about which clients connect.
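As a rough sketch, a source-NAT group on the CSS might look something like the following. All names and addresses here are placeholders, and the exact commands should be verified against the configuration guide for your CSS software version:

```
! Hypothetical sketch - service names, IPs, and the NAT VIP are examples only.
service server1
  ip address 192.168.3.11
  active

service server2
  ip address 192.168.3.12
  active

! Group that source-NATs client traffic so server replies return via the CSS
group nat-clients
  vip address 192.168.3.100
  add destination service server1
  add destination service server2
  active
```

With a group like this, traffic destined to the balanced services is source-NATed to the group's VIP, forcing return traffic from the servers back through the CSS even when client and server share a subnet.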