Cisco 11501 load balancing

net buzz
Level 1

Hi!

I have configured two Cisco CSS 11501 switches in Active/Standby mode to load balance traffic across two servers. The two servers are configured in cluster mode.

The system is working, but one of the servers is receiving more requests than the other. This results in higher utilisation on that server and is slowing down its performance.

Is this occurring because of the load-balancing algorithm I have used?

I have checked and both servers have a weight of 1.

Please find attached the configurations, topology and outputs.

Thanks and regards,

Alvin

7 Replies

Gilles Dufour
Cisco Employee

The source of the problem is "advanced-balance sticky-srcip".

This takes precedence over "balance leastconn".

You are effectively telling the CSS to always send the same client to the same server.

Because of mega-proxies, you can have thousands of clients behind a single IP address.

With sticky source IP, all of those clients get sent to the same server.

There is nothing you can do about this except removing sticky source IP.

For HTTP traffic, you can replace it with ArrowPoint cookies.

But that only works with HTTP.
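
For illustration only (the rule and service names below are placeholders, not taken from your attachments), the two commands sit together in the content rule roughly like this; because the advanced-balance line is present, a returning client is matched against the sticky table before "balance leastconn" is ever consulted:

content app-rule
  add service server1
  add service server2
  balance leastconn
  advanced-balance sticky-srcip
  active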

Gilles.

Thanks Gilles.

If I replace "advanced-balance sticky-srcip" with "advanced-balance arrowpoint-cookie", will it still take precedence over the "balance leastconn" algorithm?

Can the configuration work without an advanced-balance method?

Thanks and regards,

Alvin

Hi Alvin,

Since you've got two redundant threads going on, I'm moving my answer to your follow-up question into this thread so everyone contributing can see and take part in the same discussion.  Please disregard the other thread for the same question.

As Gilles and I have pointed out, your configuration of sticky will cause uneven load balancing.  Even if you change from source IP to cookie, sticky will still take precedence over the load balancing predictor.  If sticky did not take precedence over the predictor, then there would be no point in configuring sticky.  The whole purpose of sticky is to keep a client on the same server for the same VIP.

So the CSS works like this:

1)  New client connection comes into CSS destined to VIP

2)  CSS checks to see if there is a sticky entry for this client

3)  If sticky entry exists, client is sent (stuck) to the same server

4)  If no sticky entry exists, the CSS will load balance the client according to the predictor (e.g. leastconn, round-robin, etc.)

So, in your case, you would need to determine what is more important in your environment:  a)  even distribution of connections across real servers, or b) maintaining sticky for clients.
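
If you want to see this skew on the CSS itself, the per-service counters are a quick check (the exact columns vary by software version, so treat this as a general pointer): "show service summary" should list each service with its state and current connection count, and "show summary" should list the content rules with per-service hits.

show service summary
show summary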

I hope this helps clear it up.

Sean

Thanks Sean.

I get your point.

But I previously used a configuration without the "advanced-balance" method.

When calling up forms, I got an error message from the Oracle application stating that a network error had occurred. Please see attached.

My question is: will a "leastconn" configuration without the "advanced-balance" method be able to handle all requests starting with:

http://172.22.72.25:7778/.......

where 172.22.72.25 is the virtual IP.

Regards,

Alvin

Hi Alvin,

The URI http://172.22.72.25:7778/ is only used to match a content rule.  Only after a content rule is matched will the CSS check for a sticky entry, and if none exists, it will use the load-balancing predictor, such as leastconn.  So yes, you could use leastconn with that URI.
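
As a sketch (the owner, rule and service names are placeholders), a rule matching those requests and balancing with leastconn alone would look something like this:

owner webfarm
  content app-rule
    ! VIP and port from the URL http://172.22.72.25:7778/
    vip address 172.22.72.25
    protocol tcp
    port 7778
    url "/*"
    add service server1
    add service server2
    ! no advanced-balance line, so the leastconn predictor alone picks a server
    balance leastconn
    active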

As for your error, after you configured sticky (advanced-balance), did the error stop?  If so, then your application clearly requires sticky.  And if your application does require sticky, then you will not be able to achieve perfectly even load balancing across your servers.  The CSS is at the mercy of your sticky configuration.

If you need sticky, and are overwhelming one server because of it, then you would need to either increase the capacity of your server(s) or add more servers to the server farm.  This may not be the answer you're looking for, but look at it this way.  Currently, you have two servers being load balanced, and one is getting more connections than the other, causing degraded performance on that server.  What will happen if one of the two servers fails and ALL connections have to go to only one server?  You could suffer a full or partial outage, even though the VIP is up and one server is online.  Ideally, either one of the two servers should be able to handle the load if the other fails.  Now, if you add a third server to the content rule, then perhaps each would have enough capacity such that any two of them could handle the full load on the VIP.

For example:

If you are load balancing two servers, they should each run, under normal conditions, at no more than 50% of capacity, so that the survivor can absorb the full load if one fails.

If you are load balancing three servers, they should each run, under normal conditions, at no more than 65% of capacity; if one fails, the remaining two each pick up half of its share and land just under 100%.

Sean

Alvin,

Some websites require each client to stay with a single server.

This is why one needs to use 'advanced-balance', and apparently your site has this requirement.

You can try 'advanced-balance arrowpoint-cookie'.

After the modification, all active users will probably have to log in again.

This method also takes precedence over 'balance leastconn', but at least stickiness is not based on a source IP address, which can be shared by many clients.

Also, make sure you configure the command 'url "/*"' before configuring the new advanced-balance method.
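
Putting those two points together, the change would look roughly like this inside the existing rule (placeholder rule name; on the CSS you typically need to suspend a rule before changing settings like url, then re-activate it, so plan a short maintenance window):

content app-rule
  ! take the rule out of service before modifying it
  suspend
  no advanced-balance
  ! configure url "/*" before the new advanced-balance method
  url "/*"
  advanced-balance arrowpoint-cookie
  active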

Regards,

Gilles.

Dear Sean,

Please see the attached error.

Regards,

Alvin
