09-27-2011 05:08 AM - edited 03-07-2019 02:27 AM
What is the most common reason for a TCP RST packet sent by either the server or the client? I learned from some online forums that it is due to a session timeout. If it is due to a session timeout, why a reset rather than a retransmission? I believe TCP supports packet retransmission, so in this case, why not retransmit instead of resetting?
09-27-2011 09:07 AM
Hello,
The TCP RST flag indicates that the connection should be terminated immediately (if it is not terminated already), usually because of a fatal error. A textbook case is a SYN arriving on a port where no process is listening: the host answers the SYN with an RST, which the client application sees as "connection refused".
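That "connection refused" case is easy to observe from ordinary application code. The following is a minimal localhost sketch (my own illustration, not from the thread): the port is bound but never put into the listening state, so the connecting side's SYN is answered with an RST.

```python
import socket

# Sketch of the most familiar RST source: a SYN sent to a port with no
# listener. The target host answers the SYN with an RST, which Python
# surfaces as ConnectionRefusedError ("connection refused").
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # reserve a port, but never call listen()
port = srv.getsockname()[1]

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    cli.connect(("127.0.0.1", port))
    outcome = "connected"
except ConnectionRefusedError:
    outcome = "connection refused (SYN answered by RST)"
finally:
    cli.close()
    srv.close()
print(outcome)
```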
Regarding session timeouts: an RST is indeed sent if one side has already closed the connection after a period of inactivity, and the other side suddenly comes back and tries to continue the session as if nothing had happened. There is nothing wrong with this usage.
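An abortive teardown of this general shape can be reproduced with the SO_LINGER socket option. In this hypothetical localhost sketch (my own illustration, not from the thread), the server's close() emits an RST instead of the usual FIN, and the peer's next read fails with "connection reset", much like a peer returning to an already-torn-down session:

```python
import socket
import struct
import threading

# A server tears a session down abortively by enabling SO_LINGER with a
# zero timeout, so close() emits an RST instead of the usual FIN close.
# When the client later tries to read, it sees the reset as an error.

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def abortive_server():
    conn, _ = srv.accept()
    # l_onoff=1, l_linger=0: discard pending data and send RST on close
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))
    conn.close()

t = threading.Thread(target=abortive_server)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
t.join()                           # server has already closed abortively
try:
    cli.recv(1024)                 # the RST surfaces here as an error
    outcome = "orderly close"
except ConnectionResetError:
    outcome = "connection reset by peer"
cli.close()
srv.close()
print(outcome)
```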
Please feel welcome to ask further.
Best regards,
Peter
09-27-2011 10:27 AM
Hi Peter
If a client sends its last data segment with the FIN flag set, but the server does not receive that segment within the stipulated time frame and retransmits its own previous segment (assuming it was dropped somewhere along the path), is it possible for the client to send an RST packet in this situation?
09-27-2011 12:38 PM
Hello,
I believe that such an occurrence is possible. Per RFC 1122, Section 4.2.2.13:
A host MAY implement a "half-duplex" TCP close sequence, so that an application that has called CLOSE cannot continue to read data from the connection. If such a host issues a CLOSE call while received data is still pending in TCP, or if new data is received after CLOSE is called, its TCP SHOULD send a RST to show that data was lost.
There is one issue, though, that I am not sure of. Assume that the lost FIN segment from the client to the server was sent with Seq=X, Ack=Y. The presence of the FIN flag means that the client process called CLOSE on this TCP socket. According to the RFC text above, any arrival of new data from the other side should elicit an RST response from the client.
However, if the FIN segment carried N bytes of data, the next acceptable segment from the server to the client should have its Ack set to X+N+1 (the N data bytes plus one sequence number consumed by the FIN itself). The retransmitted segment from the server will certainly not acknowledge that far; most probably its Ack will still be X. I am not sure whether the client's TCP reacts to such a situation with an RST or with a retransmitted FIN segment. I would guess this is implementation-specific.
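The RFC 1122 behavior quoted above (data arriving after CLOSE eliciting an RST) can be observed with an ordinary sockets API. In this minimal localhost sketch (my own illustration under the assumption of typical BSD-socket semantics, not from the thread), the client fully closes its socket; when the server keeps sending, the client's stack answers with an RST, which the server then sees as an error on a subsequent send:

```python
import socket
import threading
import time

# Sketch of RFC 1122 Section 4.2.2.13: once the client has fully closed
# its socket, data arriving from the server is answered with an RST.
# The server notices the reset as an error on a later send() call.

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

result = {}

def server():
    conn, _ = srv.accept()
    time.sleep(0.2)                  # let the client close first
    try:
        conn.sendall(b"late data")   # arrives after CLOSE -> elicits RST
        time.sleep(0.2)              # give the RST time to come back
        conn.sendall(b"more data")   # now fails: connection was reset
        result["error"] = None
    except OSError as exc:
        result["error"] = type(exc).__name__
    finally:
        conn.close()

t = threading.Thread(target=server)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
cli.close()                          # full close: no reader remains

t.join()
srv.close()
print("server send failed with:", result["error"])
```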
Best regards,
Peter