We use the following configuration on a CSM to monitor a server farm, and I'm wondering how exactly the probe timers work.
nat client natpool1
real name serv1
real name serv2
probe probe1 script
So, as I understand it, the probes are sent every 5 seconds. When a probe isn't answered within one second, it is marked as failed. If two probes fail (retries 2), the real server is marked as down.
Is this correct?
In a network trace I see a different behaviour: probes are sent every 5 seconds. When a real server goes out of service, I see a probe that is not answered, and the next probe is sent after 10 seconds (I expected 5 seconds). 5 seconds later the real server is marked down in the switch log.
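My expectation can be written out as a quick back-of-the-envelope calculation (a hypothetical Python sketch; interval 5, receive 1 and retries 2 are the values implied above, not read from a device):

```python
# Assumed probe settings (taken from the question, not from a CSM):
INTERVAL = 5   # seconds between probes while the server is healthy
RECEIVE = 1    # seconds to wait for an answer before the probe fails
RETRIES = 2    # failed probes before the real server is marked down

# Probe 1 is sent at t=0 and fails at t=RECEIVE; probe 2 is sent at
# t=INTERVAL and fails at t=INTERVAL+RECEIVE, so I expected the real
# server to be marked down about 6 seconds after the first miss.
expected_down_at = (RETRIES - 1) * INTERVAL + RECEIVE
print(expected_down_at)  # 6
```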
interval - Sets the interval between probes in seconds (from the end of the previous probe to the beginning of the next probe) when the server is healthy.
Range = 2-65535 seconds
Default = 120 seconds
retries - Sets the number of failed probes that are allowed before marking the server as failed.
Range = 0-65535
Default = 3
failed - Sets the time, in seconds, between health checks once the server has been marked as failed.
Range = 2-65535
Default = 300 seconds
open - Sets the maximum time to wait for a TCP connection to open. This command is not used for non-TCP health checks (ICMP or DNS).
Range = 1-65535
Default = 10 seconds
There are two different timeout values: open and receive. The open timeout specifies how many seconds to wait for the connection to open (that is, how many seconds to wait for the SYN/ACK after sending the SYN). The receive timeout specifies how many seconds to wait for data to be received (that is, how many seconds to wait for an HTTP reply after sending a GET/HEAD request). Because TCP probes close as soon as they open, without sending any data, the receive timeout is not used.
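For illustration, a probe where both timers matter could look roughly like this (hypothetical probe name and values; the point is only the open and receive subcommands):

```
probe WEB-CHECK http
  request method get
  interval 5
  retries 2
  failed 60
  open 3
  receive 2
```

Here "open 3" bounds the TCP handshake (SYN to SYN/ACK), and "receive 2" bounds the wait for the HTTP reply after the request is sent.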
When sniffing, you should see a probe every 5 seconds. When a probe fails for the first time, a second probe should be sent after 5 seconds. When this probe fails too, the server is put out of service.
I took another trace with Wireshark, and while the server was failing I saw the following packets:
Second 0: TCP Handshake and LDAP Bind Request from CSM to Real Server -> Real Server acks the LDAP request but does not send an answer because the LDAP service has failed
Second 5: TCP FIN from CSM to Real Server
Second 10: Next TCP Handshake and LDAP Bind Request from CSM to Real Server -> same behaviour as above
Second 15: TCP FIN from CSM to Real Server
Second 15: Syslog Message with health probe failed
Second 315: Next TCP Handshake and LDAP Bind Request from CSM to Real Server and Bind Response from the Real Server which is alive again.
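The gaps in the trace are consistent with a simple model in which the next probe is scheduled an interval after the previous attempt is torn down, and in which the receive timeout in effect is 5 seconds rather than the configured 1. This is only a hypothesis, sketched in Python; the constants mirror the trace, not any documented CSM behaviour:

```python
# Hypothetical model of the observed probe scheduling (assumption:
# the CSM waits RECEIVE seconds for a reply, sends a FIN, then waits
# INTERVAL seconds before the next attempt).
INTERVAL = 5   # configured probe interval (seconds)
RECEIVE = 5    # receive timeout apparently in effect (not the configured 1)
RETRIES = 2    # failed probes before the real server is marked down

t = 0
events = []
for attempt in range(1, RETRIES + 1):
    events.append((t, "TCP handshake + LDAP bind request"))
    t += RECEIVE
    events.append((t, "no reply -> TCP FIN"))
    if attempt < RETRIES:
        t += INTERVAL
events.append((t, "real server marked down (syslog)"))

for ts, what in events:
    print(f"second {ts}: {what}")
```

Running this reproduces the timestamps from the trace: probes at seconds 0 and 10, FINs at seconds 5 and 15, and the failure logged at second 15.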
So it seems to me that the receive timer does not work as expected, because the CSM waits 5 seconds (instead of the configured 1 second) before closing a session in which it did not receive an LDAP response.
Do you have any idea concerning this behaviour?
Furthermore, does the receive timer include the TCP handshake time, or does it start once the handshake is done? In the latter case, is it correct that we should also use the open timer to guard against long TCP handshake times?
As I said in my previous post, the receive timer is only used with an HTTP probe:
"The receive timeout specifies how many seconds to wait for data to be received (that is, how many seconds to wait for an HTTP reply after sending a GET/HEAD request). Because TCP probes close as soon as they open without sending any data, the receive timeout is not used."
You could use the open timer to prevent long TCP handshakes, because it takes into account the time needed to receive the SYN/ACK after the SYN is sent.
So is there any way to configure the receive timeout (which seems to be 5 seconds) for the LDAP response, or more generally for responses from applications other than HTTP where the TCP handshake has completed correctly?