
High ping time inside provider network -MPLS

DialerString_2
Level 3

Why are the times so high?! The ISP tells me the ckt is fine and we don't have COS set up yet. This ckt is 768k, so why would it bottleneck inside the provider's network?

G:\>tracert chicago

Tracing route to chicago.com [10.100.18.81]
over a maximum of 30 hops:

  1    40 ms    <1 ms    <1 ms  10.1.21.1
  2    15 ms     7 ms     6 ms  X.12.12.37 - (My Edge router CE)
  3  1146 ms  1306 ms  1193 ms  mpls.ip1.net [X.12.12.210]
  4  1281 ms  1191 ms  1080 ms  mpls.ip2.net [X.12.12.157]
  5  1204 ms  1254 ms  1282 ms  mpls.ip3.net [X.12.12.120]
  6  1328 ms  1405 ms  1540 ms  mpls.ip4.net [X.12.12.173]
  7  1509 ms  1515 ms  1306 ms  mpls.ip5.net [X.12.12.54]
  8    37 ms    36 ms    37 ms  X.12.12.149 - (Providers PE)
  9  1109 ms  1106 ms  1152 ms  chicago [10.100.18.81]

Trace complete.

11 Replies

Joseph W. Doherty
Hall of Fame

It doesn't take much to congest 768 Kbps. Looking at the pings between the 2nd and 3rd hops makes me wonder about your circuit load. What's it like? What kind of queuing are you using outbound from the CE router? What do the CE router's outbound interface drop stats look like?
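For example (just a sketch; Serial0/0 is only a placeholder, substitute whatever interface faces this circuit), something like the following on the CE would show the queueing strategy and output drops:

show interfaces Serial0/0
show interfaces Serial0/0 | include strategy|drops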

No queuing on this interface, FIFO only. Also I have a COS policy configured on this router but it's applied to my other DS3 that belongs to another provider.

We're not paying for COS with the new provider, YET, which is why I haven't applied the policy to that interface.

The provider needs to configure Class of service for it to work?

If you control the CE router, it's possible you would benefit from something other than FIFO queuing, and from other QoS features, for output. A provider's QoS support comes more into play within their cloud and upon cloud egress.

What other types of features, Josh?

Other features might be something like WRED, but something other than single FIFO queuing might offer the best benefit. Even something as simple as WFQ on your outbound interface might make a huge difference. (Reason for "might": I don't have enough information to know for sure.)

On your outbound interface, you might try:

policy-map atest
 class class-default
  fair-queue

interface ?
 service-policy output atest
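If you do apply it, a quick sanity check (again just a sketch, with the interface name as a placeholder) is to confirm the policy is attached and watch its counters:

show policy-map interface Serial0/0 output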

I see. We have one configured already, but I haven't applied it b/c I thought the provider needed to get involved. The provider has a low traffic priority for ICMP traffic, which is why the ping times are so high.

In addition to what has been suggested: as you may know, such packets are handled via the software path (punted to the CPU) and are process-switched, so ping/traceroute should be used only to get an approximate idea of performance/forwarding.

The ideal test would be sending traffic "through" the router to exercise the fast-switching/hardware forwarding path, which is treated with higher priority, and thereby get an accurate end-to-end performance baseline.

Even if you configure QoS on your side, you won't have control of the return packets from your provider's router; its ping replies still have to be process-switched and will depend on how hot its CPU is running.
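As an illustration (on your own CE only, you can't see the provider's box; assuming IOS), these show how busy the control plane is:

show processes cpu sorted | exclude 0.00
show processes cpu history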

I got it Josh and thanks for your input.

Eric

Pidoshi, thanks..

Most likely the issue is FIFO queuing delay, somewhere. (BTW, if the provider is somehow setting a "low priority" for ICMP traffic, depending on what they're doing, you might see totally different results if you change the DSCP marking using extended ping [on a Cisco device].)
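Rough sketch of that extended ping dialog (prompts vary a bit by IOS version; 184 is the ToS byte for DSCP EF, and the target/repeat values are only illustrative):

router# ping
Protocol [ip]:
Target IP address: 10.100.18.81
Repeat count [5]: 100
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]: y
Source address or interface:
Type of service [0]: 184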

Pinku notes two points (well). One is the inaccuracy of ping times based on device load. This is true, although not often (at least in my experience) to the extent of adding 1,000 ms (unless, perhaps, the device is running flat out; hopefully not true with provider devices). BTW, if supported, some Cisco devices can be configured as a ping responder for "special" pings, which negates much of this issue (i.e. provides a more accurate ping response time).
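For instance, a minimal IP SLA sketch (assumes both ends are IOS routers that support IP SLA; on older releases the commands live under "ip sla monitor", and the far-end address, operation number, and port here are placeholders):

! far-end router
ip sla responder
! near-end router
ip sla 10
 udp-jitter <far-end-ip> 17000
ip sla schedule 10 life forever start-time now
! check results
show ip sla statistics 10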

His second point, the return path, also falls under the heading of cloud egress (mentioned in one of my prior posts). This is why improving congestion management outbound, alone, might not improve your ping times, but it might. Much depends on where the congestion, if any, is. It can be ingress, egress, or both.

I added my policy and the ping times dropped drastically. I started a -t ping before applying the policy and I was getting between 110-330 ms. Once I applied the policy it dropped; however, it took 30-40 seconds to see a difference.

Thanks for all the posts and replies.
