Unanswered Question
Feb 9th, 2009

A customer's end-to-end throughput test shows that TCP-generated traffic does not exceed 30 Mbps. The customer has bought 400 Mbps. The main site is configured for 400 Mbps on the WAN interface, while the three branches are configured for 100 Mbps each.

All the configuration seems to be OK: no packets lost on any of the interfaces, speed and duplex settings are all 100/Full, no QoS, no shaping on any of the interfaces, and no IP multicast traffic. Default FIFO queuing is used.

The test to one of the sites gives 95 Mbps, while to the other three it gives no more than 30 Mbps.

What am I missing?

Two of the sites are using 3550 switches while the other two use 3560s.


Giuseppe Larosa Mon, 02/09/2009 - 12:29

Hello Kwame,

Performance of TCP over a WAN link is influenced by delay and RTT.

The original TCP specification uses a sliding window up to 65535 bytes.

So a high-speed link with high delay cannot be used at full rate by a single TCP session.
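The effect of the 64 KB limit can be estimated directly: a single TCP flow can deliver at most window / RTT, no matter how fast the link is. A minimal sketch (the RTT values below are illustrative, not measured from this network):

```python
# A single TCP flow carries at most (receive window / RTT), regardless of link speed.
WINDOW = 65535  # bytes: the classic TCP window, without RFC 1323 window scaling

def max_throughput_mbps(window_bytes: int, rtt_seconds: float) -> float:
    """Upper bound for one TCP flow, ignoring slow start and packet loss."""
    return window_bytes * 8 / rtt_seconds / 1_000_000

for rtt_ms in (5, 20, 100):
    cap = max_throughput_mbps(WINDOW, rtt_ms / 1000)
    print(f"RTT {rtt_ms:3d} ms -> at most {cap:6.1f} Mbps")
```

Note that with a 64 KB window, an RTT of about 20 ms is already enough to cap one flow near 26 Mbps, in the range the customer is measuring.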

There are enhancements to TCP's original sliding window: the window scaling option proposed in RFC 1323 allows a much bigger window, so the adverse effects of delay can be minimized.

Operating system TCP stacks can be tuned according to RFC 1323; often Windows hosts don't use this option by default.

We have received a similar complaint between two sites connected with 622 Mbps POS over a distance of 700 km.

We gave the server people the same explanation, and after tuning, performance was much better.

Another check you need to perform is the MTU check: verify that an extended ping with packets of IP size 1500 bytes can be sent and received successfully between the main site and the other sites.

Hope to help





sultan-shaikh Mon, 02/09/2009 - 18:53

Hi Kwame,

We were able to mitigate this problem by configuring shaping on egress interfaces of the CE routers.

In addition, we also revisited the burst values on the PE routers, the calculation being burst = 2 × RTT × rate.
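As a sketch of that burst calculation (the 100 Mbps rate matches the branch links in this thread; the 20 ms RTT is an assumed value for illustration):

```python
# Burst (Bc) sizing for a shaper/policer, per the rule of thumb above:
# burst = 2 * RTT * rate, converted from bits to bytes.
def burst_bytes(rate_bps: int, rtt_seconds: float) -> int:
    return int(2 * rtt_seconds * rate_bps / 8)

# A 100 Mbps branch with an assumed 20 ms RTT:
print(burst_bytes(100_000_000, 0.020))  # 500000 bytes
```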



Joseph W. Doherty Thu, 02/12/2009 - 06:44

As Giuseppe notes, if running TCP, a common problem is that the receiving host doesn't advertise a large enough receive window to allow a TCP flow to ramp up to full link bandwidth. Ideally you want the receiving host to advertise a window that supports the path's BDP (bandwidth-delay product).

Assuming your bottleneck is at the branch, and is 100 Mbps, and the RTT is 100 ms, the BDP is 100,000,000 (bps) * .1 (sec) / 8 (bits/byte) = 1,250,000 bytes (or about 834 1500-byte packets).
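That arithmetic, as a quick sketch:

```python
# Bandwidth-delay product: the bytes "in flight" needed to keep the path full.
def bdp_bytes(bandwidth_bps: int, rtt_seconds: float) -> float:
    return bandwidth_bps * rtt_seconds / 8

bdp = bdp_bytes(100_000_000, 0.1)  # 100 Mbps bottleneck, 100 ms RTT
print(f"{bdp:.0f} bytes, about {bdp / 1500:.0f} full-size packets")
```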

Since TCP sends its outbound send window at line rate, you also need to ensure network devices can buffer these bursts.

Another issue is TCP slow start. Depending on RTT, it can take some time for TCP to hit full speed. Any drops might force TCP back into slow start or congestion avoidance, the latter being very slow to increase bandwidth.
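A rough model of that ramp-up time, assuming classic slow start in which cwnd doubles each RTT, with an initial window of 2 segments (both are assumptions for illustration, not measurements from this network):

```python
import math

# Slow start roughly doubles the congestion window every RTT, so the time to
# fill a path grows with both its BDP and its round-trip time.
def rtts_to_fill(bdp_bytes: float, mss: int = 1460, initial_segments: int = 2) -> int:
    """RTT rounds for cwnd to grow from its initial size up to the path BDP."""
    target_segments = bdp_bytes / mss
    return math.ceil(math.log2(target_segments / initial_segments))

# 100 Mbps path at 100 ms RTT (BDP = 1,250,000 bytes):
rounds = rtts_to_fill(1_250_000)
print(rounds, "RTTs, roughly", rounds * 100, "ms before the pipe is full")
```

Any loss during those rounds restarts or slows the climb, which is why drops hurt so much on long paths.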

Besides increasing the receiving host's TCP receive window, if doing large bulk data transfers, dedicated transfer applications that split the data transfer into multiple concurrent TCP flows can often utilize full link bandwidth, and reach it faster, without needing to modify the receiving client.

Another option is to use some type of transparent WAN acceleration product between the sending and receiving hosts. Many different techniques can be used by these products. (One interesting technique is "spoofing" the receiving host's buffer size to control the sender's transmission rate.)


One disadvantage of changing the receiving host's TCP window is that the change is often global, when you really want it set just for a specific TCP flow. If the receiving host's window is larger than the path's BDP, it will cause the sending host to send bursts larger than the path can accept. If FIFO queuing is in use, the sender might be driven again and again into slow start, causing very poor transfer performance. RED might improve this, since the flow will more likely drop into congestion avoidance.

Mohamed Sobair Thu, 02/12/2009 - 10:59

Hi Kwame,

Have you noticed any performance impact?

There is a reason a dedicated 400 Mbps pipe may be utilized at only 90 Mbps. Normal TCP application behaviour relies on synchronization: with global synchronization, the window size of a TCP session is cut in half once the traffic hits the maximum available bandwidth, after which the sender is allowed to grow its window again, and so on.

I would say this could be the reason if you have fully utilized your pipe, or at least have unstable utilization.

Could you please clarify what the current effect is? Does it affect your current TCP application? Do you have a performance impact?



Kwamet Tue, 03/24/2009 - 03:43

Thank you all for your rich inputs. Addressing the default Windows XP receive window of 64 KB helped, because a 100 Mbit link with 5 ms delay requires at least a 62,500-byte TCP receive window.

All is well; the customer is satisfied.
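Kwame's window arithmetic can be checked in a couple of lines:

```python
# Minimum receive window needed to keep a link busy: bandwidth * RTT (the BDP).
def min_window_bytes(bandwidth_bps: int, rtt_seconds: float) -> int:
    return int(bandwidth_bps * rtt_seconds / 8)

print(min_window_bytes(100_000_000, 0.005))  # 62500 -- just under the 64 KB XP default
```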

Giuseppe Larosa Tue, 03/24/2009 - 06:28

Hello Kwame,

Nice news that you solved this issue.

It was a TCP window problem, as suggested.

Hope to help


shivlu jain Wed, 03/25/2009 - 22:21

Hi Kwame

I have a DS3 pipe on my laptop and made all the changes to the RWND, but the download and upload speed is not increasing beyond 2.5 Mbps. Could you tell me what changes you made at your end?


shivlu jain

shivlu jain Fri, 03/27/2009 - 22:42

Hi Kwamet

This is quite a good link. My concern is that, being an SP, how can we ask our customers to change the settings on all their PCs? Every time, the customer shouts like hell and complains about the low speed. The trick is quite good for a single PC, but not for all.

Do you have any idea how to increase the performance of LAN computers which are accessing the Internet with a pipe of DS3 or more?


shivlu jain

Joseph W. Doherty Sat, 03/28/2009 - 05:11

You might use a device that sits in-line and "spoofs" the TCP connection, adjusting the TCP receive window. I don't think Cisco has any in their product line, but I believe other vendors do.

kevin.shi Tue, 05/12/2009 - 02:38


I've got a strange related issue: I can only get a 16 Mbps TCP test result over a 100 Mbps LES circuit, while a UDP test gets 95 Mbps. It is a straightforward configuration without any QoS settings. The TCP window has also been increased on the FreeBSD test machine as recommended.

What might be the possible issue here?



Joseph W. Doherty Tue, 05/12/2009 - 03:18

Possible issues include any packet drops during the TCP ramp-up and/or latency that slows the ramp-up and/or recovery.

If you needed to expand the receiving host's TCP receiving window size to match BDP, you also might need to expand network device queue sizes.

