Slow performance over 1000BaseLH->DWDM->1000BaseLH

bshellrude
Level 1

All:

Have a real mind bender here.  We basically have a DWDM circuit connecting the two divisions of our company (about 2,200 km apart).  Recently, people have been complaining of slow performance, so we finally started investigating.  Our MRTG graphs show that this link sustains anywhere from 20-40 Mb/s during peak hours, bi-directionally.  The DWDM itself is set up as 622 Mb/s.  Basically the connectivity is Cat 6509 w/Sup720, 1000Base-LH -> DWDM -> 1000Base-LH, Cat 6509 w/Sup720.

The DWDM circuit was recently brought down and tested out at 622 Mb/s using EXFO test sets.

Now here's the weird thing.  Regardless of where we connect (on the switch, from another switch, either side of the firewall, directly across the link from each end-point switch) and try a transfer (using TFTP, ttcp, FTP, etc.), any and all transfers seem to max out at 12 Mb/s.  Yet obviously, multiple transfers can run simultaneously (given the sustained aggregate rate on the interface), and each one maxes out at this same speed.

Sounds like microflow policing, right?  But according to the data transport guys, it's all Nortel gear on the DWDM, and none of it has the ability to traffic-shape or rate-limit, not to mention the fact that the EXFO test sets achieved 622 Mb/s...

Thoroughly confused at this point... any help would be appreciated.

Are there any known issues with 1000Base-LH transceivers on WS-6416-GBIC modules, with Sup720s, etc.?

3 Replies

Giuseppe Larosa
Hall of Fame

Hello Bshellrude,

You have a 2,200 km fiber span over DWDM.

Propagation delay plays a major role here.

>> Regardless of where we connect (on the switch, from another switch, either side of the firewall, directly across the link from each end-point switch) and try a transfer (using TFTP, ttcp, FTP, etc.), any and all transfers seem to max out at 12 Mb/s. Yet obviously, multiple transfers can run simultaneously (given the sustained aggregate rate on the interface), and each one maxes out at this same speed.

The TCP sliding window, if the extended window of RFC 1323 (window scaling) is not used, is at most 65,535 octets.

So one side sends a window's worth of data over the TCP session and then has to wait for an ACK from the receiver before it can send more.

see

http://www.psc.edu/networking/projects/tcptune/

What counts here is the so-called bandwidth * delay product: a single TCP flow can never go faster than (window size) / (round-trip time).
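As a rough back-of-the-envelope check (the ~5 us/km figure for light in fiber and the example RTT values are my assumptions, not measurements from your link), a small Python calculation shows why a single flow tops out where it does:

window_bytes = 65535                   # largest TCP window without RFC 1323 window scaling
fiber_km = 2200                        # quoted span; the real fiber route may well be longer
rtt_s = 2 * fiber_km * 5e-6            # ~22 ms round trip from propagation alone
print(window_bytes * 8 / rtt_s / 1e6)  # ~23.8 Mb/s ceiling at a 22 ms RTT
print(window_bytes * 8 / 0.044 / 1e6)  # ~11.9 Mb/s at a 44 ms RTT

If the real round-trip time is around 40-45 ms (quite plausible once actual fiber routing and DWDM/switch latency are added on top of the straight-line distance), the 64 KB window caps every single TCP flow near 12 Mb/s, which matches what you are seeing.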

>> Sounds like microflow policing right..

The effect looks similar, but it is caused by propagation delay.

We have had similar complaints from server people on a POS 622 link over a 700 km span, and by using two Linux boxes with the extended TCP window enabled, performance became far better.

Hope to help

Giuseppe

Thanks for the reply.

You hit that nail square on the head!!!  Did some tests today, modified the window size on two hosts on either side, and performed a transfer... immediate and total improvement.
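For anyone who finds this thread later, a per-application test can be sketched roughly like this (the host name, port, and buffer size below are purely illustrative, not our actual setup):

import socket

BUF = 4 * 1024 * 1024                         # 4 MB, comfortably above the bandwidth*delay product
# Ask for big socket buffers *before* connecting so a larger window can be negotiated.
# (On Linux the values are still capped by net.core.rmem_max / wmem_max.)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF)
s.connect(("test-host.example.com", 5001))    # hypothetical receiver running a ttcp-style sink
s.sendall(b"x" * (100 * 1024 * 1024))         # push 100 MB and time it externally
s.close()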

So now to find a graceful solution to fix it globally.

Thanks again!!

Hello,

>> So now to find a graceful solution to fix it globally.

The TCP/IP stacks of all involved hosts need to be tuned to use the extended TCP window (RFC 1323 window scaling);

network devices cannot help in this case.
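To give a feel for the sizes involved (the 44 ms RTT is an assumption carried over from the earlier calculation), the window a single flow needs grows with the bandwidth * delay product:

rtt_s = 0.044                          # assumed round-trip time
for target_bps in (100e6, 622e6):
    window_bytes = target_bps * rtt_s / 8
    print("%d Mb/s needs a window of roughly %d KB" % (target_bps / 1e6, window_bytes / 1024))

That works out to roughly 540 KB for 100 Mb/s and about 3.3 MB for the full 622 Mb/s, far beyond the 64 KB limit.  On Linux hosts this kind of tuning is usually done with sysctl (enabling net.ipv4.tcp_window_scaling and raising net.ipv4.tcp_rmem / tcp_wmem and net.core.rmem_max / wmem_max); other operating systems have their own equivalents.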

Hope to help

Giuseppe
