I am trying to wrap my head around exactly what the bandwidth delay product says about TCP performance.
The bandwidth delay product is defined as the capacity of a pipe: bandwidth (bits/s) * RTT (s), where that capacity is specific to TCP and is a byproduct of how the protocol itself operates.
Assume we have a 15 Mb/s pipe with 100 ms of latency. The bandwidth delay product is 15,000,000 * 0.1 = 1,500,000 bits.
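To make the arithmetic concrete, here is a quick sketch (the variable names are just mine for illustration):

```python
# Bandwidth-delay product: how much data fits "in flight" in the pipe.
bandwidth_bps = 15_000_000  # 15 Mb/s link
rtt_s = 0.100               # 100 ms round-trip time

bdp_bits = bandwidth_bps * rtt_s
print(f"BDP = {bdp_bits:,.0f} bits")  # 1,500,000 bits
```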
So this says that, theoretically, under optimal conditions, a single TCP session on a 15 Mb/s pipe cannot consume more than 1.5 Mbps (note: theoretical). HOWEVER...
I have also seen the bandwidth delay product compared to the TCP receive window size. In TCP/IP Illustrated, Stevens seems to suggest that if the bandwidth delay product is larger than the TCP receive window, then the receive window is the limiting factor.
So converting our previous example from bits to bytes, we get 187,500 bytes. The default TCP receive window is 65,535 bytes, so our bandwidth delay product is about 2.86 times the TCP receive window size.
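Checking those numbers the same way (65,535 bytes is the classic default without window scaling, which I'm assuming here):

```python
bdp_bytes = 1_500_000 / 8  # convert the BDP from bits to bytes
default_rwnd = 65_535      # default receive window without window scaling

print(f"BDP = {bdp_bytes:,.0f} bytes")                # 187,500 bytes
print(f"BDP / rwnd = {bdp_bytes / default_rwnd:.2f}")  # ~2.86
```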
Now to my question...
In this scenario, as I currently understand it, even if I increase the bandwidth, and even if I stretch the pipe out by adding latency, the throughput will never exceed what the TCP receive window allows. Ergo, the TCP receive window size is the limiting factor.
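In code, the mental model I'm working from looks roughly like this (a sketch of my understanding, not something I've verified):

```python
def max_throughput_bps(bandwidth_bps: float, rtt_s: float, rwnd_bytes: int) -> float:
    """At most one receive window of data can be in flight per RTT,
    and throughput can never exceed the raw link rate."""
    window_limited_bps = (rwnd_bytes * 8) / rtt_s
    return min(bandwidth_bps, window_limited_bps)

# 15 Mb/s, 100 ms, 64 KB window -> ~5.24 Mb/s (window-limited)
print(max_throughput_bps(15e6, 0.100, 65_535))
# More bandwidth doesn't help once we are window-limited:
print(max_throughput_bps(100e6, 0.100, 65_535))
# Stretching the pipe (more latency) only makes it worse:
print(max_throughput_bps(15e6, 0.200, 65_535))
```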
Is this a correct statement? If not, can you please describe the relationship between the TCP receive window size and the bandwidth delay product?