ip tcp window-size command on the router

Unanswered Question
Nov 18th, 2008

Hi All,

We have 2 MPLS PE nodes (Cisco 7600). Server 1 is connected to PE1 and Server 2 is connected to PE2. We have a latency of around 20 ms between these two PE nodes. When we tried a data transfer from Server 1 to Server 2 via FTP, we could achieve a peak utilization of around 20 Mbps for that FTP session. We then understood that the BDP (bandwidth-delay product) comes into play and that the throughput depends on the latency. So, in order to increase the throughput, we tried increasing the Server 1 and Server 2 window buffer sizes to more than 64 Kbytes, but we couldn't see much difference in the throughput.

Meanwhile, we are trying to understand the ip tcp window-size command on the PE router and whether it has any relation to the above scenario. Also, what does this command really do on the router, and if it is not set, what is its default value? Any help would be really appreciated.

Also, when we increased the buffer sizes from 32 KB to 64 KB, there was a significant increase in throughput, but not when we increased above 64 KB.
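[Editor's note: the plateau described above follows directly from the bandwidth-delay product limit. A single TCP flow can never exceed receive-window / round-trip-time, so with the classic 64 KB maximum window and this thread's 20 ms RTT the ceiling is about 26 Mbps, close to the observed 20 Mbps peak. A minimal sketch, with illustrative function names:]

```python
# Single-flow TCP throughput is bounded by window / RTT
# (the bandwidth-delay product limit).
def max_throughput_bps(window_bytes: int, rtt_seconds: float) -> float:
    return window_bytes * 8 / rtt_seconds

# Classic 64 KB (unscaled) window at the 20 ms RTT from this thread:
ceiling = max_throughput_bps(65535, 0.020)
print(ceiling / 1e6)  # ~26.2 Mbps, regardless of link speed
```

This also shows why raising the buffers above 64 KB changed nothing: without window scaling negotiated on both ends, the advertised window stays capped at 65535 bytes.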

Thanks

Regards

Anantha Subramanian Natarajan

royalblues Tue, 11/18/2008 - 11:46

I don't think modifying the TCP window size on the router would increase the throughput.

The default TCP window size on the routers is 4128 bytes (without window scaling).

The window sizes are negotiated between the end systems.

To increase the window size beyond 64 KB, you need to enable window scaling. Also try a UDP transfer, which will result in higher throughput.

Narayan

anasubra_2 Tue, 11/18/2008 - 11:56

Hi Narayan,

Thank you very much. A UDP transfer has been tested, and the peak utilization can be pushed up to nearly the pipe bandwidth, as you suggested.

Assuming the window size is negotiated between the end systems, what is the use of this router command? Is it for TCP connections originated by and destined to the router?

Also, I have another question: is there a chance the router overwrites the window size set by the servers?

Once again thanks for your response

Regards

Anantha Subramanian Natarajan

royalblues Tue, 11/18/2008 - 12:16

I think configuring the TCP window size will affect traffic destined to the router.

You can test it anyway by enabling window scaling and doing the transfer. All you need to do is define the TCP window size to be higher than 65535.

Under normal circumstances, the router is not intelligent enough to override the TCP window size for transit traffic. This is actually what is done in WAN optimizers, which considerably improve throughput across a link by modifying the window size, but it is never a requirement to adjust the window sizes on all the network devices in between.

HTH

Narayan

anasubra_2 Tue, 11/18/2008 - 12:20

Hi Narayan,

Thank you very much for the reply. It seems the router's TCP window size doesn't offer an option above 65535. Is there any command to add on the router to allow more than 65535?

Thanks

Regards

Anantha Subramanian Natarajan

royalblues Tue, 11/18/2008 - 12:27

Anantha,

You should be able to configure a window size greater than 65535. You cannot configure a scaling factor directly, but when you use a value above 65535, scaling is enabled automatically.

http://www.cisco.com/en/US/docs/ios/12_2t/12_2t8/feature/guide/tcpwslfn.html

The Ethereal trace mostly shows the normal window size plus a scale factor in the options, not the newly calculated value you would get by multiplying them together.
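[Editor's note: the relationship between what a capture shows and the effective window can be checked with a quick calculation. Per RFC 1323, the true window is the 16-bit advertised value shifted left by the scale factor exchanged in the SYN options. A small illustrative sketch:]

```python
def effective_window(advertised_window: int, scale_shift: int) -> int:
    # RFC 1323 window scaling: true window = advertised value << shift,
    # where the shift is agreed once, in the SYN/SYN-ACK options.
    return advertised_window << scale_shift

# A capture might show "window 65535, scale 2"; the real window is:
print(effective_window(65535, 2))  # 262140 bytes
```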

Narayan

anasubra_2 Tue, 11/18/2008 - 12:34

Hi Narayan,

Thanks for your quick response. It seems the current code we are running, 12.2(18)SXF4, doesn't support TCP window scaling. When I search the IOS Feature Navigator, it doesn't list this feature for this code, and on the router, ip tcp window-size <0 - 65535 bytes> only offers this range. If I'm missing something, kindly let me know.

Thanks

Regards

Anantha Subramanian Natarajan

royalblues Tue, 11/18/2008 - 12:41

Yes, it mostly seems to be an IOS issue.

You can try another IOS, but I still don't think it's going to increase the throughput of your transfers.

cheers

Narayan

JosephDoherty Tue, 11/18/2008 - 16:38

If the host's TCP receive buffer is less than the BDP, the sender will not be able to utilize the path's maximum bandwidth. The sender will keep stopping transmission while awaiting ACKs.

If the host's TCP receive buffer is larger than the BDP, you won't see any improvement in throughput, since a receive buffer sized for the BDP should already support 100% utilization. You might even see a reduction in performance if the sender sends too much, which results in packet drops within network devices along the path.

Often overlooked: network device queues may also need to be sized to support the BDP; otherwise you may encounter early packet drops while a flow is attempting to ramp up to maximum bandwidth.

You didn't note the minimum bandwidth between servers, but if it were, for example, 100 Mbps with your noted 20 ms latency, the BDP is 250,000 bytes (assuming my math is correct). Assuming TCP packets are 1500 bytes (1460 MSS), a queue size of about 171 packets might be needed on the routers. Anything more or less, especially less, could preclude optimal performance.
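[Editor's note: the arithmetic above can be checked with a short sketch, using the same assumed 100 Mbps / 20 ms figures:]

```python
import math

def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    # Bandwidth-delay product: bits in flight on the path, in bytes.
    return bandwidth_bps * rtt_seconds / 8

def queue_packets(bandwidth_bps: float, rtt_seconds: float, mss: int = 1460) -> int:
    # Queue depth (in full-size packets) needed to absorb one full window.
    return math.ceil(bdp_bytes(bandwidth_bps, rtt_seconds) / mss)

print(bdp_bytes(100e6, 0.020))      # 250000.0 bytes
print(queue_packets(100e6, 0.020))  # 172 packets (~171 as in the post, depending on rounding)
```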

Also don't forget TCP slow start: as the BDP increases, it takes TCP longer to ramp up. Further, after any packet loss, it takes TCP longer to recover its maximum flow rate.

Any, all, or some combination of the prior factors might account for why you didn't see an increase in performance when you tried a receive buffer larger than 64 KB. Also, remember that scaled receive windows are a later TCP option; both hosts (sender and receiver) have to agree to use it.

On the issue of the router's "ip tcp window-size": my understanding is that it sets the router's TCP receive window when the router itself is acting as a host. Little on the router would see an improvement when it's adjusted, except perhaps sending a file to the router with something like FTP, although if writing to flash, the flash is likely to be the bottleneck.

PS:

Other than an optimization appliance to improve TCP performance when dealing with larger BDPs, perhaps the easiest method to improve performance is to send data via multiple concurrent TCP flows. Multiple flows deal much better with bandwidth ramp-up and with recovering bandwidth after packet loss.

anasubra_2 Tue, 11/18/2008 - 17:20

Hi Josephdoherty,

Awesome response, thank you very much. One thing I couldn't understand is the tuning of the queue size required on the routers: how can that be evaluated on the routers?

Thank you very much

Regards

Anantha Subramanian Natarajan

JosephDoherty Tue, 11/18/2008 - 17:45

re: Tuning the router queue size (for worst case):

First compute the BDP across the network (end-to-end), not just to next hop. (Basically the same BDP you would compute for the receiving host).

Divide the BDP by the payload portion of the expected packets (1460 for standard Ethernet). That gives the queue size needed to absorb a maximum TCP transmission window.

NB: You'll only really need to tune the queue where there's a bandwidth reduction. Often this is at the WAN router's WAN link egress interface. The larger the delta between the source's sending bandwidth and the WAN bandwidth, the more important tuning the WAN queue becomes. If the bandwidth delta is small, little or no tuning may be needed. (I don't have a calculation for how much of the worst-case queue allocation you'll need.)

To make this clearer, assume you had a 10 Mbps WAN via satellite. If the sender (on the LAN) were sending at 10 Mbps, you wouldn't need to adjust the WAN queue. If the sender (on the LAN) were sending at gig speed, packets would queue up at the WAN interface. Assuming the sender's transmission window equals the BDP, the router's queue would need to hold much or most of it, because it arrives 100x faster than the router can send.
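[Editor's note: the satellite example can be made concrete. When a full window arrives at LAN speed and drains at WAN speed, the queue must absorb roughly the difference; with a gig LAN feeding a 10 Mbps WAN, almost the entire window sits in the queue. A rough worst-case sketch, with illustrative names:]

```python
# Worst case: the sender bursts a full window at LAN speed; the router
# drains it at WAN speed, so the queue absorbs the difference.
def burst_queue_bytes(window_bytes: float, lan_bps: float, wan_bps: float) -> float:
    burst_time = window_bytes * 8 / lan_bps   # time for the burst to arrive
    drained = wan_bps * burst_time / 8        # bytes the WAN sends meanwhile
    return window_bytes - drained

# A 250 KB window arriving at 1 Gbps, leaving at 10 Mbps:
print(burst_queue_bytes(250000, 1e9, 10e6))  # 247500.0 -> nearly the whole window queues
```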

PS:

As my prior post shows, the BDP and the needed queue depth can get large, so if there are other flows sharing the link, try to ensure everything isn't placed into one huge FIFO queue.

anasubra_2 Wed, 11/19/2008 - 05:09

Hi Joseph,

Thank you. Actually, in our case the WAN link is a 10 Gig circuit and the LAN is 1 Gig. So I would assume queuing is not a problem, right?

Thanks

Regards

JosephDoherty Wed, 11/19/2008 - 05:41

A 10 Gig WAN: don't see many of those!

1 Gig into 10 Gig won't be a problem for a single host. However, I didn't mention the issue of multiple hosts.

Assuming the hosts have gig links, but there's 10 Gig or more of bandwidth leading to your WAN router, we could see congestion at the aggregation bottleneck. You'll need to consider that multiple hosts could transmit at the same instant, so you'll need to allow sufficient queue space at the aggregation bottleneck. Worst case, the queue would need to support the path's BDP.

What you have to keep in mind is that standard TCP at the host doesn't "meter" its transmission rate like a shaper would; it sends all the packets in its send window at full bandwidth (NB: the send window won't exceed the receiver's receive window). When standard TCP adjusts its send rate, it is really adjusting the size of its send window. This is why data traffic is "bursty", which is one reason we need queues.

If TCP gets up to full rate and the BDP is optimal, it will "self-clock", sending new packets as returning ACKs release them. At that point you shouldn't see any queuing on the router (assuming a single flow) beyond a packet or two (assuming one ACK per two packets sent).

anasubra_2 Wed, 11/19/2008 - 07:17

Hi Joseph,

Thank you very much. Do you know the command to determine the queue size available on the router?

Thanks

Regards

Anantha Subramanain Natarajan

JosephDoherty Wed, 11/19/2008 - 07:46

How you see the queue allocation varies. On interfaces running simple FIFO, show interface will show it.

e.g.

FastEthernet0/1 is up, line protocol is up
...
Queueing strategy: fifo
Output queue: 0/40 (size/max)

But for other queuing strategies, you might need to use other commands.

e.g.

ATM0/IMA0 is up, line protocol is up
...
Queueing strategy: Per VC Queueing

For the ATM example, there's an attached service policy, with various queues attached to its classes.

anasubra_2 Wed, 11/19/2008 - 08:02

Hi Joseph,

Thank you very much

Regards

Anantha Subramanian Natarajan

Posted November 18, 2008 at 11:35 AM