How to understand serialization delay and BDP?

Hi experts,

I would like to understand serialization delay and BDP more accurately.

I see the following description about serialization delay:

"Serialization delay is the time required to put digital data onto a transmission line. The formula is: packet size (bits) / line rate (bits per second). For example, transmitting a 1024-byte packet on a 1.544-Mbps T1 line takes about 5 ms."

My question is: is this actually the transmission delay from one end to the other? If so, what is the difference between serialization delay and propagation delay?

----------------------------------------

Another one is about BDP (bandwidth-delay product). I know the formula is bandwidth * RTT. For example, for a 1 Gb/s LAN connection with 1 ms RTT, the BDP would be 1 Mb, or 125 KB. Does this mean I can only achieve a 125 KB/s file transfer rate on a 1 Gb/s LAN connection? That doesn't make sense to me, because a file transfer on a 1 Gbps connection can reach more than 80 MB/s.

In addition, does the BDP concept apply only to connection-oriented transport? What about connectionless transport, such as UDP?

Thanks a lot!

Accepted Solutions

How to understand serialization delay and BDP?

Serialization delay is the time it takes to serialize a packet, i.e. how long it takes to physically put the packet on the wire. It depends on the speed of the physical interface, not on the subrate if the bandwidth is shaped. In your example, a T1 can send 1.544 / 8 = 0.193 MB/s, so serializing a 1024-byte packet takes 1024 / 193,000 = 5.3 ms.
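
The arithmetic above can be written as a one-liner (a minimal Python sketch; the function name is just for illustration):

```python
# Serialization delay = packet size (bits) / line rate (bits per second).
# Numbers from the example above: a 1024-byte packet on a 1.544 Mbit/s T1.

def serialization_delay_ms(packet_bytes: int, line_rate_bps: int) -> float:
    """Time to clock the packet onto the wire, in milliseconds."""
    return packet_bytes * 8 / line_rate_bps * 1000

print(f"{serialization_delay_ms(1024, 1_544_000):.1f} ms")  # -> 5.3 ms
```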

Propagation delay is how long it takes a packet to travel across the medium. The difference between copper and fiber is not that big. Suppose two cities are 2000 km apart and a packet has to travel between them. The speed of light in fiber is around 200,000 km/s, so the one-way trip takes 2000 / 200,000 = 10 ms, meaning the RTT would be 20 ms. In reality, a ping between the two devices would show somewhat higher latency due to serialization delay, queueing delay, and so on.
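
The same kind of back-of-the-envelope calculation works for propagation delay (Python sketch; 200,000 km/s is the approximate speed of light in fiber, as above):

```python
# Propagation delay = distance / signal speed in the medium.
# Light in fiber covers roughly 200,000 km/s (about two-thirds of c in vacuum).

def propagation_delay_ms(distance_km: float, speed_km_per_s: float = 200_000) -> float:
    """One-way propagation delay in milliseconds."""
    return distance_km / speed_km_per_s * 1000

one_way = propagation_delay_ms(2000)
print(one_way, "ms one way;", 2 * one_way, "ms RTT")  # -> 10.0 ms one way; 20.0 ms RTT
```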

BDP tells TCP how much data can be outstanding (sent but not yet acknowledged). Because TCP sends data and then waits for acknowledgements, the amount of data in flight can never exceed the window, so to fill the link the window must be at least the BDP, which is the product of the connection speed and the RTT. The BDP can therefore be used to size the RWIN (receive window), which is how many bytes can be received before the sender must be acknowledged. Previously, some operating systems had a maximum RWIN of 65 kbytes.

Say you have 100 Mbit/s between two hosts and an RTT of 50 ms. The BDP is then 0.62 MB, far larger than the old 65-kbyte limit. With the calculator below you can see that with an RWIN of 65 kB, the maximum speed would be around 10 Mbit/s. This is a common reason people complain about poor TCP throughput, and why they often see better results with multiple sessions: each session can then use around 10 Mbit/s.
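
In Python, the two formulas from the paragraphs above look like this (a sketch using the same numbers: 100 Mbit/s, 50 ms RTT, a 65,535-byte RWIN; the function names are illustrative):

```python
# BDP = bandwidth * RTT (an amount of data, not a rate).
# A fixed receive window caps throughput at RWIN / RTT.

def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bytes that can be in flight before the first ACK returns."""
    return bandwidth_bps * rtt_s / 8

def window_limited_rate_bps(rwin_bytes: int, rtt_s: float) -> float:
    """Maximum throughput when RWIN is the bottleneck."""
    return rwin_bytes * 8 / rtt_s

print(f"BDP: {bdp_bytes(100_000_000, 0.05) / 1e6:.2f} MB")
print(f"Cap: {window_limited_rate_bps(65_535, 0.05) / 1e6:.1f} Mbit/s")  # ~10.5 Mbit/s
```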

http://en.wikipedia.org/wiki/TCP_tuning

http://www.switch.ch/network/tools/tcp_throughput/

Daniel Dib
CCIE #37149

Please rate helpful posts.


Re: How to understand serialization delay and BDP?

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

Just to add a little to the info Daniel provided: TCP's RWIN, ideally, is sized to the host-to-host bandwidth * RTT, i.e. the BDP. This lets TCP use all the bandwidth optimally. If RWIN is too small (often the case on LFNs [long fat networks]), TCP pauses transmission while waiting for ACKs. If RWIN is too large, TCP keeps transmitting even though the bandwidth between the hosts has been filled; this may result in transient device queue/buffer overflows. In "real" networks, links are often shared with other traffic, so the available host-to-host bandwidth varies moment to moment, which can push TCP into either the too-small or the too-large situation. (Dynamically and continuously resizing RWIN to track the current BDP is one "trick" some traffic-shaping appliances use to optimize TCP transfer performance between hosts. Some newer TCP stacks carefully monitor ACK RTT with high precision to avoid the too-large situation; the latter is designed to work with hosts that advertise a too-large RWIN.)

As Daniel notes, multiple flows will often transfer data faster than a single flow because a single host's RWIN is often too small. But multiple flows also transfer faster, especially noticeably over high-RTT links, because they operate concurrently: one flow slow-starts or works through congestion avoidance at 1x, while n flows do the same at nx. Remember that standard TCP, when in congestion avoidance, only increases its transmission window by one MSS per RTT. On a very large LFN link, a single flow can take hours and hours to regain the transmission rate lost to a single packet drop.
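
To get a feel for that last point, here is a rough sketch of the recovery time for a classic Reno-style flow. The 10 Gbit/s / 100 ms numbers are illustrative assumptions, and the model ignores slow start, delayed ACKs, and modern congestion-control variants:

```python
# After a single loss, a Reno-style TCP flow halves its window and then
# regains it at one MSS per RTT. Rough recovery time for a flow that had
# the pipe full (illustrative: 10 Gbit/s path, 100 ms RTT, 1460-byte MSS).

def recovery_time_s(bandwidth_bps: float, rtt_s: float, mss_bytes: int = 1460) -> float:
    """Seconds to grow the window back after it is halved by one loss."""
    full_window = bandwidth_bps * rtt_s / 8      # bytes to fill the pipe (the BDP)
    rtts_needed = (full_window / 2) / mss_bytes  # one MSS regained per RTT
    return rtts_needed * rtt_s

print(f"~{recovery_time_s(10_000_000_000, 0.1) / 60:.0f} minutes")  # -> ~71 minutes
```

Well over an hour to recover from one dropped packet, which is why high-BDP paths need window scaling and modern congestion control.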

Replies

How to understand serialization delay and BDP?

Serialization delay is the time that it takes to serialize a packet, meaning how long time it takes to physically put the packet on the wire. This is dependant on the speed of the physical interface, not subrate if the bandwidth is shaped. In the example you have, a T1 can send 1.544 / 8 = 0.193 MB/s. So to serialize a 1024 byte packet it would take 1024 / 193000 = 5.3 ms.

The propagation delay is how long it takes for a packet to travel. The difference between copper and fiber is not that huge. If you have a distance of 2000km between two cities and a packet is to travel between them. The speed of light in fibre is around 200000 km/s. So it would take 2000 / 200000 = 10ms for the packet to travel one way, meaning that the RTT would be 20ms. In reality if you ping between two devices there would be a bit higher latency due to serialization delay, queueing delay and so on.

BDP is used for TCP to see how much data can be outstanding before it is acknowledged. Because the way TCP works by sending data and waiting for an acknowledgement that means that we can't send at a higher rate than the BDP which is a factor of the connection speed and the RTT. This can be used to calculate the RWIN which is how many bytes can be received before acknowledging the sender. Previously some operating systems had a maximum RWIN of 65 kbyte.

If you have 100 Mbit/s and a RTT of 50 ms between them. Then the BDP is 0.62 MB which is far bigger than the standard 65 kbyte. With the calculator below you can see that with a RWIN of 65 kB, the maximum speed would be around 10 Mbit/s. This is a common reason why people complain about bad TCP throughput and why they often see better result if they use multiple sessions since each session could use around 10 Mbit/s then.

http://en.wikipedia.org/wiki/TCP_tuning

http://www.switch.ch/network/tools/tcp_throughput/

Daniel Dib
CCIE #37149

Please rate helpful posts.

Daniel Dib CCIE #37149 Please rate helpful posts.
New Member

How to understand serialization delay and BDP?

Really helpful, Daniel. Thanks a lot!!!


Re: How to understand serialization delay and BDP?

Very nice additions, thank you!
