01-12-2014 12:03 AM
Hi,
We have an ACE Module (Version A2(3.3)).
We need to configure the probe for the below service:
http://10.1.1.10:8080/rws/MainService
The application is critical, so we are concerned about quick failover response and any delay in the server's service. Below is the sample configuration:
"
probe http SERVICE
port 8080
interval 3
passdetect interval 5
faildetect 1
receive 5
request method get url /rws/MainService
"
I am confused about the "No. Probes skipped" counter. What is the effect of skipped probes, and how can we avoid them?
What are the best settings for the open timeout and receive timeout in combination with the time interval? Please suggest the best solution.
How does the current sample configuration work?
Regards,
Anser
01-12-2014 06:03 AM
Hi Anser,
Regarding the counter "No. Probes skipped", below is the explanation from the user guide, which explains it well:
The time interval between probes is the frequency at which the ACE sends probes to a server marked as passed. You can change the time interval between probes by using the interval command, which is available in all probe-type configuration modes. The syntax of this command is as follows:
interval seconds
The seconds argument is the time interval in seconds. Enter a number from 2 to 65535. By default, the time interval is 15. The open timeout value for TCP- or UDP-based probes and the receive timeout value can impact the execution time of a probe. When the probe interval is less than or equal to these timeout values, and the server takes a long time to respond or fails to reply within the timeout values, the probe is skipped. When a probe is skipped, the "No. Probes skipped" counter shown by the show probe detail command increments.
The configuration of these parameters depends on your requirements, but best practices are listed in the user guide. If you have many servers and many probes, it is not advisable to use a short interval, which would make the ACE probe already-passed servers very frequently. You can do this for a critical application, as you said.
Along with the interval value, which dictates how frequently the ACE probes a passed server, you should also look at the passdetect interval, which dictates how often the ACE probes a server that has failed a probe, and the passdetect count, which dictates after how many successful replies that server is marked as passed again. If the application is critical, you can configure the count as 1 or 2. By default, the passdetect interval is 60 seconds and the passdetect count is 3.
You should also look at faildetect, which dictates how many consecutive probes a server must fail before the ACE marks it as failed. The default is 3. Then there are the open timeout and receive timeout, whose impact is explained above.
I would understand these values and configure them accordingly to get optimal results. In your case I would reduce the interval and the passdetect interval/count, but by how much is something you need to decide.
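To make this concrete, a tuned probe for a critical application could look like the following (these values and the expect status line are illustrative assumptions, not a recommendation; note the interval is kept above the open/receive timeouts so probes are not skipped):

probe http SERVICE
  port 8080
  interval 10
  open 3
  receive 3
  faildetect 2
  passdetect interval 10
  passdetect count 2
  request method get url /rws/MainService
  expect status 200 200

You can then watch the "No. Probes skipped" counter with show probe SERVICE detail to confirm probes are completing within the interval.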
Regards,
Kanwal
01-12-2014 06:08 AM
Hi Anser,
Also, please upgrade your device, as you are running an old version. I also don't see any expect response configured. Without an expect response configured, any response from the server will be marked as failed.
And I see your interval is lower than your receive timeout, which might be why you are seeing the "probes skipped" counter going up. If your server takes more than 5 seconds to respond, a probe will be skipped, since the ACE is still waiting for an answer to the last probe. In this case, increasing the interval time should solve that problem.
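As a sketch, an expect status line can be added under the existing probe (200 is an assumption here; use whatever status /rws/MainService actually returns on success):

probe http SERVICE
  expect status 200 200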
Regards,
Kanwal
01-12-2014 10:46 PM
Hi Kawaljeet,
Thanks for the response. The configuration below seems effective to me, given the server's response times and the application's requirements:
probe http SERVICE
port 8080
interval 3
passdetect interval 3
faildetect 2
open 5
receive 5
request method get url /rws/MainService
Appreciate your feedback.
Regards,
Anser
01-13-2014 07:26 AM
Hi Anser,
The interval time should be greater than the open/receive timeout values; otherwise, if a server fails to respond to a probe within the specified time, the probe will be skipped, as explained above. If you are sure that will not happen, then the above looks OK.
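For example, keeping the timeouts below the interval would look like this (illustrative values only):

probe http SERVICE
  interval 10
  open 3
  receive 3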
Regards,
Kanwal
01-13-2014 10:33 PM
Hi Kanwal,
Thanks for the response. A quick query about configuring these: our serverfarm is currently in production, so I am wondering about making the following changes to the running services:
1- Do I need to stop service while changing the algorithm from LEASTCONN to ROUNDROBIN, or will it take effect without any impact?
2- Do I need to stop service while adding real servers to the existing running serverfarm?
3- Adding and changing the running PROBE settings.
What is the best approach, as the services are in production?
Regards,
Anser
01-14-2014 04:40 AM
Hi Anser,
By default, the predictor is round-robin, so changing it to least connections overrides round-robin.
You don't need to stop service to put a real server in the serverfarm. If the predictor is least connections, you may also want to configure slow-start.
As for the probe, you should be able to configure the changes without taking the server out of service.
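As a sketch of adding a real server in service (the serverfarm and rserver names here are assumptions):

serverfarm host SF-WEB
  rserver NEW-SERVER 8080
    inservice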
Regards,
Kanwal
01-14-2014 05:13 AM
Hi Kanwal,
It is already configured for 'leastconn'. We need to change it to 'roundrobin'.
Will this change affect the running service? Do we need to take the rservers in the serverfarm out of service during the algorithm change?
What is the effect on running connections and on new connections (after the change)?
Regards,
Anser
01-14-2014 10:04 AM
Hi Anser,
It should return to the default round-robin. There should be no impact on traffic or established connections; only new connections will be load balanced according to the predictor, which will be round-robin.
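For reference, the change itself is just the predictor line under the serverfarm (the serverfarm name here is an assumption):

serverfarm host SF-WEB
  no predictor leastconns

Removing the predictor line returns the serverfarm to the default round-robin behavior; check the exact no-form syntax on your ACE version.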
Regards,
Kanwal