I'm load balancing across 3 servers, each running a web server, behind a single virtual address.
The problem I'm experiencing is that one of the servers, whilst it still listens on port 80, stops serving content. When this happens, the LocalDirector doesn't appear to notice that there is a problem, and over a relatively short time it seems to divert the majority of traffic to the faulty web server, as I use leastconns to balance connections. I suspect this is because the faulty server drops or completes its connections almost immediately, so it always has the fewest active connections and leastconns keeps sending new traffic its way.
Can you confirm that this is appropriate behaviour? I'm going to switch from leastconns to roundrobin, which (although it won't solve the problem) should lessen the impact, but confirmation that this is expected behaviour would help me sleep a little better!
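For what it's worth, in the meantime I'm considering an external watchdog, since a bare TCP check on port 80 succeeds even when the server has stopped serving content. A minimal sketch (the addresses are placeholders, not my real servers) of an application-level check that requires an actual HTTP 200 response:

```python
import urllib.request

# Placeholder addresses; substitute the real server IPs.
SERVERS = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]

def serving_content(host, timeout=5):
    """Return True only if the server answers GET / with HTTP 200.

    A plain TCP connect to port 80 would still succeed on the faulty
    server, which is why a port check alone misses the failure.
    """
    try:
        with urllib.request.urlopen(f"http://{host}/", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def check_all():
    """Report which servers are actually serving pages."""
    return {host: serving_content(host) for host in SERVERS}
```

Run from cron, something like this could at least alert on (or trigger removal of) the dead server until the balancer itself can be made content-aware.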
Or perhaps there is a more suitable Cisco product that would avoid this issue altogether?