Cisco Support Community

New Member

TCP and Bandwidth


We have a 4 Mb frame relay connection to one of our sites. We ran some tests to confirm the bandwidth: all shaping/throttling of traffic was turned off, and the tests were done after hours when no other users were on the link.

Sending TCP traffic, the line maxed out at about 2 Mb/s, and we could never get above 2.5 Mb/s; sending UDP, the full 4 Mb/s was utilised. We monitored the usage using MRTG.

Can someone please explain why this happened?


Re: TCP and Bandwidth


The reason for this is the congestion control built into TCP. A sender ramps its rate up (exponentially during slow start, then linearly during congestion avoidance), but as soon as a TCP segment is dropped it halves its congestion window. The result is the sawtooth behaviour you saw, whose average sits well below line rate for a single flow. If you have multiple TCP streams and use a congestion avoidance mechanism like WRED, you should observe better utilisation of the link.
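The sawtooth, and why it averages out below line rate, can be seen in a toy model (the increment, starting rate, and round count are made-up values; real TCP paces itself per RTT and per congestion window):

```python
# Toy model of TCP's additive-increase/multiplicative-decrease behaviour
# on the 4 Mb/s link from the question. All tuning values are assumptions
# for illustration, not measurements.

LINK_CAPACITY = 4.0   # link rate in Mb/s

def simulate_aimd(rounds=1000, increase=0.1):
    rate = 0.1        # starting send rate in Mb/s
    total = 0.0
    for _ in range(rounds):
        if rate > LINK_CAPACITY:
            rate /= 2             # drop detected: halve the rate
        else:
            rate += increase      # no drop: creep upwards
        total += min(rate, LINK_CAPACITY)
    return total / rounds         # average throughput achieved

avg = simulate_aimd()
print(f"single flow averages {avg:.1f} Mb/s ({avg / LINK_CAPACITY:.0%} of the link)")
```

A single sawtooth oscillating between half rate and full rate averages out around three quarters of the link, which is in the same ballpark as the 2 to 2.5 Mb/s the poster measured.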

UDP has no feedback mechanism to detect dropped packets, so it keeps sending at whatever rate the application offers and is therefore able to use the entire link.


PS. Pls do remember to rate posts


Re: TCP and Bandwidth

Some of the reduction comes from the latency of the connection (think average ping time): TCP has to receive ACKnowledgements for blocks of data before it can continue transmitting.

The delay spent waiting for ACKs always reduces throughput, especially on circuits with higher latencies, but it is what provides reliable transmission at layer 4.
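That ceiling can be put in numbers: a single flow with receive window W can never move data faster than W / RTT, no matter how fast the link is. The window and RTT figures below are assumed examples, not taken from the poster's circuit:

```python
# Window-limited TCP throughput: one flow caps out at window / RTT.

def max_throughput_mbps(window_bytes, rtt_seconds):
    return window_bytes * 8 / rtt_seconds / 1e6   # bytes per RTT -> Mb/s

# Classic 64 KB window (no window scaling) over a 100 ms path:
print(max_throughput_mbps(65535, 0.100))   # ceiling of roughly 5.2 Mb/s
# The same window over a 250 ms path:
print(max_throughput_mbps(65535, 0.250))   # roughly 2.1 Mb/s, under the 4 Mb rate
```

On a long enough path even a clean, drop-free circuit will not fill 4 Mb/s with one flow and a default window.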

Applications that use UDP (like TFTP) basically "firehose" the traffic out to the recipient, but (in the case of TFTP) rely on a higher layer for throttling and error checking.

In some cases (like VoIP and video), UDP doesn't have or use a higher-level handshake / ACK / error check, because by the time the error or drop is detected it's too late to re-send the information.

In cases like this there is a mechanism at the receiver's side to compensate. In the case of VoIP, a de-jitter buffer is adjusted and/or the missing packet is "smoothed out" by blending adjacent audio (this applies to streaming video as well).
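The "too late to re-send" point can be sketched as a toy playout buffer: the receiver delays playback by a fixed amount, and a packet that arrives after its playout deadline is simply skipped and concealed. All timings below are made up:

```python
# Minimal de-jitter playout sketch. A packet sent at send_time must arrive
# by send_time + playout_delay to be played in sequence; later arrivals are
# skipped, since a retransmission could never make the deadline either.

def playout(arrivals, playout_delay=0.060):
    """arrivals: (send_time, seq, arrival_time) tuples, times in seconds."""
    played, skipped = [], []
    for send_time, seq, arrival_time in sorted(arrivals, key=lambda p: p[1]):
        if arrival_time <= send_time + playout_delay:
            played.append(seq)
        else:
            skipped.append(seq)   # too late: conceal the gap, don't re-send
    return played, skipped

# 20 ms packet spacing; packet 2 is held up 90 ms in the network:
pkts = [(0.00, 0, 0.030), (0.02, 1, 0.045), (0.04, 2, 0.130), (0.06, 3, 0.085)]
print(playout(pkts))   # -> ([0, 1, 3], [2]): packet 2 misses its deadline
```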

In your tests, if you bring up multiple endpoints, each using a TCP-based application, you will likely see the full bandwidth being used.

Good Luck


New Member

Re: TCP and Bandwidth


What can make a huge difference with TCP transmission is whether your telco uses traffic shaping or traffic policing to limit your 4 Mb connection. From what you describe it appears they are using traffic policing, which simply drops excess packets without altering the RTT. TCP then goes through its retransmission process and ramps up again (because the RTT stays low) until packets get dropped, and the retransmit/ramp-up cycle repeats until the file has been sent. If traffic shaping is used instead, excess packets are queued, which increases the RTT; TCP throttles back in response to the congestion and settles its stream of packets around the maximum transmission rate.
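The contrast can be sketched in a few lines: a policer discards the excess on the spot (RTT unchanged), while a shaper queues it and drains at the contracted rate (RTT grows). Tick size, packet size, and the queue-without-limit simplification are all assumptions for illustration:

```python
from collections import deque

# Policing vs shaping on an assumed 4 Mb/s contract.

RATE = 4_000_000 / 8   # contracted rate, bytes per second
PACKET = 1500          # packet size in bytes
TICK = 0.01            # scheduler interval in seconds

def police(arrivals_per_tick, ticks):
    """Send what fits in each tick's allowance; drop the rest outright."""
    allowance = RATE * TICK
    sent = dropped = 0
    for _ in range(ticks):
        budget = allowance
        for _ in range(arrivals_per_tick):
            if budget >= PACKET:
                budget -= PACKET
                sent += 1
            else:
                dropped += 1          # excess is discarded, no delay added
    return sent, dropped

def shape(arrivals_per_tick, ticks):
    """Queue the excess and drain at the contracted rate (queue limit ignored)."""
    allowance = RATE * TICK
    queue, sent = deque(), 0
    for _ in range(ticks):
        queue.extend([PACKET] * arrivals_per_tick)
        budget = allowance
        while queue and budget >= queue[0]:
            budget -= queue.popleft()
            sent += 1
    return sent, len(queue)           # queued packets show up as added latency

print(police(5, 100))   # -> (300, 200): 200 packets dropped
print(shape(5, 100))    # -> (300, 200): 200 packets still queued (delayed)
```

Both deliver the contracted rate, but the policer turns the excess into losses that whipsaw TCP, while the shaper turns it into delay that TCP paces itself against.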

I've done a fair bit of work on this exact issue and had the same results as you on a 4 Mb circuit; we are now looking to deploy traffic shaping through policy maps instead of rate-limiting.

One solution could be for you to deploy traffic shaping at your WAN interfaces, depending upon the topology of your network.
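A minimal MQC sketch of outbound shaping on the WAN interface might look like the following. The interface name is a placeholder, and `shape average` takes a rate in bits per second; adapt it to your own topology:

```
policy-map SHAPE-4MB
 class class-default
  shape average 4000000
!
interface Serial0/0
 service-policy output SHAPE-4MB
```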

The URL will give you a much better idea of the differences between policing and shaping.



pls rate posts!