
Latency

cisco steps
Level 1

Hi,

Can you help me understand the consequences/impact on system and service performance when latency exceeds the specified parameters, as well as what alternative courses of action can or must be taken to resolve performance-impacting latency issues, and at what cost?

Thanks

14 Replies

Giuseppe Larosa
Hall of Fame

Hello Octorbust,

Excessive delay can impact:

TCP throughput

VoIP voice quality.

VoIP requires one-way delay to stay within 150 ms across the overall path to qualify as good-quality voice.

(VoIP also requires minimal delay variation, also known as jitter.)

Low-speed WAN links can take advantage of link fragmentation and interleaving (LFI), which breaks large data packets into small fragments and interleaves them with the small VoIP packets.

This can be achieved with Multilink PPP or using FRTS.
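To put numbers on why fragmentation helps: a common LFI target is to keep any single fragment's serialization delay at or below 10 ms, so a voice packet never waits longer than that behind a data fragment. A minimal sketch (the 10 ms target and link speeds here are illustrative assumptions, not values from this thread):

```python
def lfi_fragment_size(link_bps: int, max_delay_ms: float = 10.0) -> int:
    """Largest fragment (in bytes) whose serialization delay on a link
    of link_bps bits/sec stays within max_delay_ms milliseconds."""
    return int(link_bps * (max_delay_ms / 1000.0) / 8)

# On a 64 kbps link, a 10 ms blocking target means ~80-byte fragments;
# a full 1500-byte packet would otherwise block voice for ~187 ms.
print(lfi_fragment_size(64_000))    # 80
print(lfi_fragment_size(128_000))   # 160
```

On a T1 the same formula allows ~1930-byte fragments, which is why LFI only pays off on low-speed links.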

Different components of delay exist:

serialization delay: the time to put the bits onto the wire

propagation delay: caused by distance and the finite speed of light

queuing/processing delay: the time spent waiting in queues

The first can be reduced by using faster links, with a clear increase in cost.

The third can be reduced with LLQ and LFI (where it is wise to use LFI, i.e., on low-speed links).

The second component of delay cannot be reduced further.
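The first two components can be estimated with simple arithmetic. A rough sketch (the T1 rate and the USA-to-Asia fiber distance are illustrative assumptions; fiber propagation speed is taken as roughly 2/3 of the speed of light):

```python
def serialization_delay_ms(packet_bytes: int, link_bps: int) -> float:
    """Time to clock a packet's bits onto the wire."""
    return packet_bytes * 8 / link_bps * 1000

def propagation_delay_ms(distance_km: float, speed_km_per_s: float = 200_000) -> float:
    """One-way delay from distance alone (~200,000 km/s in fiber)."""
    return distance_km / speed_km_per_s * 1000

# A 1500-byte packet on a T1 takes ~7.8 ms just to serialize.
print(round(serialization_delay_ms(1500, 1_544_000), 1))  # 7.8
# ~12,000 km of fiber between the USA and Asia costs ~60 ms one way,
# before any queuing: no amount of extra bandwidth removes this part.
print(propagation_delay_ms(12_000))  # 60.0
```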

TCP applications can achieve better performance on high-speed, high-delay links by using the TCP extended (scaled) window;

see RFC 1323.

This requires proper tuning of the endpoints.
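The reason the RFC 1323 window scaling matters on a path like this: without it, TCP throughput is capped at one window per round trip. A sketch (the 200 ms USA-to-Asia RTT is an assumed figure for illustration):

```python
def max_tcp_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    """Window-limited TCP throughput ceiling: one window per round trip."""
    return window_bytes * 8 / rtt_s

# The classic non-scaled 64 KB window over a 200 ms RTT caps a single
# flow at ~2.6 Mbps, no matter how fast the underlying links are.
print(max_tcp_throughput_bps(65_535, 0.2))  # 2621400.0
```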

Hope to help

Giuseppe

Thanks Giuseppe

I need to be able to explain all of that to the circuit provider. I have a circuit going from the USA to Asia. Due to the distance, I would like to multilink 3 T1s, or maybe 2; it depends on the latency of each one.

With that said, here is the scenario: a customer calls for a product. The call goes to Asia. Asia then accesses a database in the USA (they have to; we cannot migrate the data overseas) and then provides the customer with the info. Techs in Asia are monitored for productivity by the minute, so if the data is late it will have a big impact on operator status due to the delayed info. I am trying to put this together so I can explain it in more detail.

Hope it's not too much :-)

Thanks Giuseppe

Hello Octorbust,

this looks very challenging.

Note that MLPPP can also be used with a single physical member.

I'm not sure MLPPP can stay up if the two or three links have significantly different delays.

Using the extended TCP window should help improve database response time.

Hope to help

Giuseppe

I read all your comments, and they all seem to come from experts; I do appreciate that. I am not at your level of knowledge yet, but I'm trying to learn...

One more thing that Giuslar mentioned: using the extended TCP window should help improve database response time. Is this done on a Cisco router? If yes, how? Or is there a document I can read?

Thank you all for contribution :-)

Keep the network up

When you're dealing with international distances, keep in mind that a major part of latency is often physical distance. Additional bandwidth does not solve this aspect of latency. Depending on your service provider, they might be able to provide the best physical path to minimize the additional latency, but even the best path might only reduce the overall latency by a little.

From what you've described, there won't be much you can do for the voice portion, but with regard to the database portion between Asia and the USA, there are a couple of possibilities.

You note Asia has to access a database in the USA and it cannot be moved. Is it possible to have a replica copy in Asia? If not, there are special WAAS/WAFS devices that might improve access performance. (They often use protocols that deal better with high latency, use special compression, and/or cache information locally.)

Depending on the application, sometimes using a remote terminal application across the WAN that accesses the database locally performs better.

Istvan_Rabai
Level 7

Hi Guys,

Just to add my knowledge to this:

Sometimes, application developers adore controlling the TCP sending and acknowledging algorithm from the application level.

If so, the delays in the TCP acknowledgment can be huge: the application only allows data to be sent or acknowledged when it has finished its own processing of data, buffers, etc.

When large database records are transmitted over the wire, this can add up to disproportionately giant transfer delays.

And then the application developers say:

"The application is very slow because the network is poorly designed."

So Ocporburst, be very careful with this. You must make sure the TCP sending and acknowledging works properly, with minimal delays on both sides of the connection. (Generally, 10 ms between receiving a packet and acknowledging it is already too large).

For this, I would suggest doing some testing and packet captures with Wireshark, so you can unambiguously tell what your network is doing and to what extent it is responsible for the delays.

If you find big delays as described, tell the application developers to redesign their creature. TCP is a good protocol. They must allow TCP to do its job.

Cheers:

Istvan

"Sometimes, application developers adore controlling the TCP sending and acknowledging algorithm from the application level. "

Truth be known, it's often the opposite. I.e., application developers are tasked with delivering a working application and many don't want to be bothered with issues not related to task completion.

What happens is this: a developer's workstation is on a LAN, running against a nearby and lightly loaded development server. When the application is moved into production on most LANs, it still performs well (though not always; database scalability design issues often arise at that point).

If the application works well on a LAN but not on a WAN, especially one halfway around the world, you're 110% correct:

'And then the application developers say:

"The application is very slow because the network is poorly designed." '

Or the variant, "there's something wrong with the network".

TCP is a good protocol, but building applications that work well across a WAN is more difficult than doing so across a LAN. It's more complex than simply letting TCP do its job, which is why poor network application performance is such a common complaint for applications running across a WAN.

Oh, and lastly, it's often noted that many WANs don't provide LAN bandwidth; yet even when they do, bandwidth alone may not allow a network application that performs well across a LAN to perform well across a WAN.

Hi Joseph,

You speak from my heart, with the reservation that you can express all this more clearly than me.

Thanks:

Istvan


special WAAS/WAFS devices that might improve access performance. (They often use protocols that deal better with high latency, use special compression, and/or cache information locally.)

Is this something that can be done in the router config? I will read about it. Thanks.

Hello Ocporbust,

the implementation of the TCP extended window involves the servers and the clients, not the routers, which just route and forward packets.

See the following step-by-step example for Windows XP:

http://www.psc.edu/networking/projects/tcptune/OStune/winxp/winxp_stepbystep.html

By the way, Joseph's suggestion about WAN application optimizers is a good one.

Hope to help

Giuseppe

ok , Thank you all

Will increasing the receiving host's TCP receive buffer improve database response time?

It depends. It may if the database server is transmitting a large result set. What's large? That varies, but for many client TCP stacks it might be in the range of 30 to 120 KB.

(A well-written WAN application might, by design, send segments/pages for large result sets and ask the user to scroll or page for more information. For these, you shouldn't see very large individual data transfers.)

Often the focus for increasing the TCP receive buffer is on the receiving host. Giuseppe's reference shows how to do so for Windows (and also shows the default buffer sizes). However, you may also need to increase the queue sizes on routers.

The ideal size for a receiving host's TCP receive buffer is the BDP (bandwidth-delay product). (If the term is new, you should be able to find many references by searching for it.)

I don't recall seeing a recommendation for matching a router's queue size to the BDP, but I think 1/2 of the BDP might work.

NB: if the queue size on a router is large, you'll likely also want more than just one FIFO interface queue.
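For the USA-to-Asia scenario in this thread, the BDP works out as follows. A sketch assuming three bundled T1s at 1.544 Mbps each and an assumed 200 ms round-trip time (plug in your own measured RTT):

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> int:
    """Bandwidth-delay product: the amount of data 'in flight' needed to
    fill the pipe, and hence the ideal TCP receive buffer size."""
    return int(bandwidth_bps * rtt_s / 8)

T1_BPS = 1_544_000
bundle = 3 * T1_BPS            # three bundled T1s, ~4.6 Mbps

# A ~116 KB receive buffer is needed to keep the bundle full at 200 ms
# RTT, well beyond the non-scaled 64 KB TCP window.
print(bdp_bytes(bundle, 0.2))  # 115800
```

A single T1 at the same RTT needs only ~39 KB, which is why the buffer question becomes pressing once links are bonded.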

Istvan, another truth be known, not so much from the heart, but from doing application development for 20 years. (Oh, and thank you for your compliment.)
