High ping times inside provider network - MPLS

Unanswered Question
Jul 17th, 2009

Why are the times so high? The ISP tells me the circuit is fine, and we don't have CoS set up yet. This circuit is 768k; why would it bottleneck inside the provider's network?

G:\>tracert chicago

Tracing route to chicago.com []
over a maximum of 30 hops:

  1    40 ms    <1 ms    <1 ms
  2    15 ms     7 ms     6 ms  X.12.12.37 - (my edge router, CE)
  3  1146 ms  1306 ms  1193 ms  mpls.ip1.net [X.12.12.210]
  4  1281 ms  1191 ms  1080 ms  mpls.ip2.net [X.12.12.157]
  5  1204 ms  1254 ms  1282 ms  mpls.ip3.net [X.12.12.120]
  6  1328 ms  1405 ms  1540 ms  mpls.ip4.net [X.12.12.173]
  7  1509 ms  1515 ms  1306 ms  mpls.ip5.net [X.12.12.54]
  8    37 ms    36 ms    37 ms  X.12.12.149 - (provider's PE)
  9  1109 ms  1106 ms  1152 ms  chicago []

Trace complete.

Overall Rating: 5 (2 ratings)
Joseph W. Doherty Sat, 07/18/2009 - 16:01

It doesn't take much to congest 768 Kbps. Looking at the ping times between the 2nd and 3rd hops makes me wonder about your circuit load. What's it like? What kind of queuing are you using outbound from the CE router? What are the CE router's outbound interface drop stats like?
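As an aside, on a Cisco CE router something along these lines would show the load and output-drop counters being asked about (Serial0/0 is just a placeholder for the WAN interface):

```
Router# show interfaces Serial0/0
Router# show interfaces Serial0/0 | include load|drops|queue
```

The txload/rxload figures and the "Total output drops" counter in that output are the quickest indicators of whether the 768k circuit is saturating.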

DialerString_2 Mon, 07/20/2009 - 08:13

No queuing on this interface, FIFO only. I also have a CoS policy configured on this router, but it's applied to my other DS3, which belongs to another provider.

We're not paying for CoS with the new provider yet, which is why I haven't applied the policy to that interface.

Does the provider need to configure class of service for it to work?

Joseph W. Doherty Mon, 07/20/2009 - 08:32

If you control the CE router, you might benefit from queuing other than FIFO, and from other QoS features, on output. A provider's QoS support comes more into play within their cloud and upon cloud egress.

Joseph W. Doherty Mon, 07/20/2009 - 09:52

Other features might be something like WRED, but something other than single FIFO queuing might offer the best benefit. Even something as simple as WFQ on your outbound interface might make a huge difference. (The reason for "might": I don't have enough information to know for sure.)

On your outbound interface, you might try:

policy-map atest
 class class-default
  fair-queue

interface ?
 service-policy output atest
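Once a policy like that is attached, something along these lines (interface name again a placeholder) would confirm it's active and show its per-class counters:

```
Router# show policy-map interface Serial0/0
```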

DialerString_2 Mon, 07/20/2009 - 10:35

I see. We have one configured already, but I hadn't applied it because I thought the provider needed to get involved. The provider assigns a low priority to ICMP traffic, which is why the ping times are so high.

pidoshi Mon, 07/20/2009 - 11:03

In addition to what has been suggested: as you may know, such packets are handled via the software path (punted to the CPU) and are process-switched, so judging performance/forwarding via ping/traceroute should only be used to get an approximate idea.

The ideal test is to send traffic "through" the router, which exercises the fast-switching/hardware forwarding path and is treated with higher priority, so as to get an accurate end-to-end performance baseline.

Even if you configure QoS on your side, you won't have control of the return packets from your provider's router, which still have to be process-switched and will depend on how hot that CPU is running!

DialerString_2 Mon, 07/20/2009 - 11:07

I got it, Josh, and thanks for your input.


Joseph W. Doherty Mon, 07/20/2009 - 11:55

The most likely issue is FIFO queuing delay, somewhere. (BTW, if the provider is somehow setting a "low priority" for ICMP traffic, then depending on what they're doing, you might see totally different results if you change the DSCP marking using extended ping [on a Cisco device].)
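For illustration, a Cisco extended ping prompts for the ToS byte, so re-marked test pings might reveal whether the provider is deprioritizing by marking (the target address is a placeholder; ToS 184, i.e. 0xB8, corresponds to DSCP EF):

```
Router# ping
Protocol [ip]:
Target IP address: 192.0.2.1
Extended commands [n]: y
Type of service [0]: 184
! accept defaults for the remaining prompts
```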

Pinku notes two points (well made). One is the inaccuracy of ping times based on device load. This is true, although not often (at least in my experience) to the extent of adding 1,000 ms (unless, perhaps, the device is running flat out; hopefully not true of provider devices). BTW, if supported, some Cisco devices can be configured as a ping responder for "special" pings, which negates much of this issue (i.e., provides a more accurate ping response time).
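The responder feature alluded to above is IP SLA. As a sketch under assumptions (addresses, port, and entry numbers are placeholders), a udp-jitter probe against an IP SLA responder timestamps close to the wire on the far end, largely removing CPU scheduling delay from the measurement:

```
! On the far-end router:
ip sla responder

! On the measuring router:
ip sla 1
 udp-jitter 192.0.2.1 16384
 frequency 60
ip sla schedule 1 start-time now life forever

! Then inspect the results:
! show ip sla statistics 1
```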

His second point, the return path, also falls under the heading of cloud egress (mentioned in one of my prior posts). This is why improving congestion management outbound, alone, might not improve your ping times; then again, it might. Much depends on where the congestion, if any, is. It can be ingress, egress, or both.

DialerString_2 Mon, 07/20/2009 - 12:53

I added my policy and the ping times dropped drastically. I had started a -t ping before applying the policy and was getting between 110 and 330 ms. Once I applied the policy the times dropped, though it took 30-40 seconds to see a difference.

Thanks for all the posts and replies.

