Cisco Support Community

New Member

MPLS VPNs - Latency

Hello All,

I have an MPLS VPN setup for one of my sites. We have a 10M pipe (Ethernet handoff) from the MPLS SP, and it is divided into 3 VRFs:

6M - Corp traffic

2M - VRF1

2M - VRF2

The users are facing a lot of slowness while trying to access an application on VRF1. I can see the utilization on VRF1 is almost 60% of its total capacity (2M). Yesterday, when pinging across to the VRF1 peer in the MPLS cloud, I was getting a max response time of 930 ms.

xxxxx#sh int FastEthernet0/3/0.1221

FastEthernet0/3/0.1221 is up, line protocol is up

  Hardware is FastEthernet, address is 503d.e531.f9ed (bia 503d.e531.f9ed)

  Description: xxxxx

  Internet address is x.x.x.x/30

  MTU 1500 bytes, BW 2000 Kbit, DLY 1000 usec,

     reliability 255/255, txload 71/255, rxload 151/255

  Encapsulation 802.1Q Virtual LAN, Vlan ID  1221.

  ARP type: ARPA, ARP Timeout 04:00:00

  Last clearing of "show interface" counters never

I also see a lot of output drops on the physical interface Fa0/3/0. Before going to the service provider, can you please tell me if this could be an issue with the way QoS is configured on these VRFs?

xxxxxxx#sh int FastEthernet0/3/0 | inc drops

  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 3665

Appreciate your help.

Thanks

Mikey

1 ACCEPTED SOLUTION

Accepted Solutions

Re: MPLS VPNs - Latency

Hi Mikey,

1) Will output drops also contribute to the latency here?

Yes, output drops cause packet loss, which leads to TCP retransmissions and adds to the latency.
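As a rough rule of thumb for how loss translates into TCP throughput, the Mathis approximation (rate ≈ MSS / (RTT × √loss)) can be sketched; the numbers below are hypothetical, not taken from this link:

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Rule-of-thumb steady-state TCP throughput under random loss
    (Mathis et al. approximation): rate ~= MSS / (RTT * sqrt(p))."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate))

# Hypothetical: 1460-byte MSS, 200 ms RTT, 0.1% loss -> ~1.85 Mbit/s
print(mathis_throughput_bps(1460, 0.2, 0.001))
```

Even a fraction of a percent of loss caps a flow well below the 2M shaped rate, which is why the drops matter here.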

2) I can try and enable IP accounting on that sub-interface (VRF) and see the load. Thoughts?

IP accounting helps you identify which host is overloading the link.

I would definitely recommend graphing the interfaces; that would give you a good idea.

3) As you said, if the 2M gets maxed out I would see latency as the shaper is fully utilized. But I don't see that in the interface load mentioned above. I have pasted the ping response from the time the load output was taken. I can't read much into the policy-map output; does it indicate that the 2M is fully utilized and packets are being dropped?

The packets don't get dropped but queued; that's why shapers are good. Policers, on the other hand, drop packets.

Shapers just queue (buffer) them, which causes latency but avoids the drops that would trigger TCP retransmits.

HTH

Kishore

Rate if it helps and mark as correct if answered.

15 REPLIES
New Member

MPLS VPNs - Latency

I'd appreciate it if someone could reply to this.

Thanks

Mikey

Cisco Employee

MPLS VPNs - Latency

Mikey,

It is possible that these issues can be caused by your QoS policy. Can you show us the configuration of the Fa0/3/0.1221 and Fa0/3/0 interfaces?

Also please try to visit the following document for more information:

http://www.cisco.com/en/US/products/hw/routers/ps133/products_tech_note09186a0080094791.shtml#topic4

Best regards,

Peter

New Member

MPLS VPNs - Latency

Hi Peter,

The interface and Policy map configs are mentioned below.

interface FastEthernet0/3/0

description xxxx

bandwidth 10000

no ip address

ip flow ingress

ip flow egress

duplex full

speed 10

end

interface FastEthernet0/3/0.1221

description VRF1

bandwidth 2000

encapsulation dot1Q 1221

ip vrf forwarding XYZ

ip address x.x.x.x 255.255.255.252

ip accounting output-packets

ip flow ingress

ip flow egress

service-policy output ABC

end

policy-map ABC

class class-default

  shape average 2000000

Thanks

Mikey

Cisco Employee

MPLS VPNs - Latency

Hello Mikey,

I apologize for being late here. Can you please post the output of the show policy-map interface fa0/3/0.1221 command? Thank you!

Best regards,

Peter

New Member

MPLS VPNs - Latency

Thanks Peter. The output is shown below.

xxxxxxx#sh policy-map interface fa0/3/0.1221

FastEthernet0/3/0.1221

  Service-policy output: ABC

    Class-map: class-default (match-any)

      27517 packets, 9676419 bytes

      5 minute offered rate 6000 bps, drop rate 0 bps

      Match: any

      Traffic Shaping

           Target/Average   Byte   Sustain   Excess    Interval  Increment

             Rate           Limit  bits/int  bits/int  (ms)      (bytes)

          2000000/2000000   12500  50000     50000     25        6250

        Adapt  Queue     Packets   Bytes     Packets   Bytes     Shaping

        Active Depth                         Delayed   Delayed   Active

        -      0         27517     9676419   238       321248    no
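The shaper parameters in this output are internally consistent; a quick sketch (assuming the Bc/Be values come straight from the "Sustain"/"Excess" columns above, since only `shape average 2000000` was configured) reproduces the Interval, Increment, and Byte Limit columns:

```python
# Sanity-check of the shaper numbers shown above (illustrative only).
cir_bps = 2_000_000   # shape average (bits/sec)
bc_bits = 50_000      # sustained burst per interval ("Sustain bits/int")
be_bits = 50_000      # excess burst ("Excess bits/int")

tc_ms = bc_bits / cir_bps * 1000         # shaping interval -> 25.0 ms
increment_bytes = bc_bits // 8           # bytes released per Tc -> 6250
byte_limit = (bc_bits + be_bits) // 8    # token-bucket depth -> 12500
print(tc_ms, increment_bytes, byte_limit)
```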

Thanks

Mikey

Cisco Employee

Re: MPLS VPNs - Latency

Hello Mikey,

Hmm, these statistics look completely fine - no huge drops, no huge number of delayed packets. It seems as if this shaper did not have much to do.

Can you post the complete output of the show interface Fa0/3/0? I am interested in seeing all the counters. Is there any other traffic (via a different subinterface) flowing through this physical interface?

Best regards,

Peter

New Member

MPLS VPNs - Latency

Hi Peter,

I see a lot of output drops on the physical interface, as per the output below. There is no other traffic flowing through Fa0/3/0.1221, as it is in a different VRF.

xxx#sh int fa0/3/0

FastEthernet0/3/0 is up, line protocol is up

  Hardware is FastEthernet, address is 503d.e531.f9ed (bia 503d.e531.f9ed)

  Description: xxxxxxxxx

  MTU 1500 bytes, BW 10000 Kbit, DLY 1000 usec,

     reliability 255/255, txload 61/255, rxload 93/255

  Encapsulation 802.1Q Virtual LAN, Vlan ID  1., loopback not set

  Keepalive set (10 sec)

  Full-duplex, 10Mb/s, 100BaseTX/FX

  ARP type: ARPA, ARP Timeout 04:00:00

  Last input 00:00:00, output 00:00:00, output hang never

  Last clearing of "show interface" counters 01:34:32

  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 3392

  Queueing strategy: fifo

  Output queue: 0/40 (size/max)

  5 minute input rate 3674000 bits/sec, 694 packets/sec

  5 minute output rate 2412000 bits/sec, 1099 packets/sec

     4522320 packets input, 3245065176 bytes

     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles

     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored

     0 watchdog

     0 input packets with dribble condition detected

     6621868 packets output, 1830822500 bytes, 0 underruns

     0 output errors, 0 collisions, 0 interface resets

     0 babbles, 0 late collision, 0 deferred

     0 lost carrier, 0 no carrier

     0 output buffer failures, 0 output buffers swapped out

Thanks

Mikey

MPLS VPNs - Latency

Hi Mikey,

Do you graph the bandwidth in an NMS or something? The rate in your last output shows it's pushing about 2.4 Mbps for the entire interface. However, this is a 5-minute average; there could be occasional spikes where the bandwidth came close to, or went above, the total speed, which is 10M in your case.

Also, another thing: you have mentioned that the speed is set to 10M on the parent interface, yet your entire bandwidth from the SP is 10M as well. Normally you would have the physical interface run a bit faster. Is it possible for you to change the physical interface speed to 100M? I am sure it won't affect anything else; I just want to see if the output drops disappear (I am sure they will).

I guess you are pushing more traffic overall, and the individual VRFs are getting congested, as you can see from the bytes delayed; delayed bytes indicate congestion on the interface. Try monitoring the overall bandwidth transmitted by graphing the traffic, and that will give you a good indication.
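If graphing is not available, utilization can be derived from two SNMP ifOutOctets counter samples; a minimal sketch (the sample numbers below are hypothetical):

```python
def utilization_pct(octets_t0, octets_t1, interval_s, speed_bps):
    """Average utilization between two ifOutOctets samples, as a percentage."""
    bits = (octets_t1 - octets_t0) * 8
    return bits / (interval_s * speed_bps) * 100

# Hypothetical 5-minute samples on a 10 Mbit/s link:
print(utilization_pct(0, 90_000_000, 300, 10_000_000))  # -> 24.0
```

An NMS plots exactly this delta at each polling interval, which is why short spikes between polls can still be invisible on the graph.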

Hope this gives some insight.

Kishore

New Member

MPLS VPNs - Latency

Hi Kishore,

Right now graphing for these interfaces is disabled, so I am unable to pull out any data. But on the physical interface the utilization does not normally go beyond 60%.

We have 3 VRFs defined here. All the normal user traffic for accessing corporate apps goes through one VRF, and the other two are for accessing apps over the DMZ and other things.

VRF1 - 6M

VRF2 - 2M ---> where the latency is

VRF3 - 2M

These VRFs have been assigned individual bandwidth from the SP end. I fail to understand how overall utilization can affect a specific sub-interface (VRF). Please elaborate.

The speed 10 setting was given to us by the SP, and I am not sure if changing that would bring the link down.

Thanks

Mikey

MPLS VPNs - Latency

Hi Mikey,

So the output drops you are seeing are a result of the data pushed via that interface exceeding the speed of the interface. When you change the interface speed to more than 10, i.e. 100 Mbps, you will see that these disappear.

"The speed 10 setting was given to us by the SP, and Iam not sure if changing that would bring the link down?"

well changing the speed might bring the link down if their side is hard coded or maybe you want to try auto neg to see what they have configured. Anyway in short  I wanted to point out the cause of the output drops.

"These VRFs have been assigned individual BW from the SP end. I fail to understand how utilization overall can affect when specific sub-interface (VRF)? Please elaborate"


What I meant was: if all the VRFs were pushing data at their maximum, you would see packets get delayed. When a shaper's queue fills up, it starts to enqueue packets, and hence the slowness or latency.

If only one sub-interface is maxed out, then it's purely because the 2M is fully used. It doesn't matter whether the other sub-interfaces are sending traffic or not: you have configured the shaper for 2M and you have fully utilized it.
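To see how a backed-up shaper turns into latency, the head-of-line wait for a new packet is simply the queued bits divided by the shaping rate (the queue depth below is hypothetical, not from the thread):

```python
def queue_delay_ms(queued_bytes, drain_rate_bps):
    """Time for a shaper draining at drain_rate_bps to empty queued_bytes."""
    return queued_bytes * 8 / drain_rate_bps * 1000

# 100 x 1500-byte packets queued behind a 2 Mbit/s shaper:
print(queue_delay_ms(100 * 1500, 2_000_000))  # -> 600.0 (ms)
```

A modest backlog at 2 Mbit/s is enough to explain sub-second ping times without a single drop being counted.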

Does this make sense?

Please feel free to ask more.

HTH

Kishore

New Member

MPLS VPNs - Latency

Hi Kishore,

Thanks for the clarification. Let me speak to the service provider and see if we can sort out the Output drops issue.

I had a few more queries.

1) Will output drops also contribute to the latency here?

2) The show int fa0/3/0.1221 output below only shows the load on the physical interface (fa0/3/0), not that of the sub-interface itself. Right?

xxxxxx#sh int fa0/3/0.1221 | inc load

     reliability 255/255, txload 49/255, rxload 94/255

xxxxx#sh int fa0/3/0 | inc load

     reliability 255/255, txload 49/255, rxload 94/255

I can try and enable IP accounting on that sub-interface (VRF) and see the load. Thoughts?

3) As you said, if the 2M gets maxed out I would see latency as the shaper is fully utilized. But I don't see that in the interface load mentioned above. I have pasted the ping response from the time the load output was taken. I can't read much into the policy-map output; does it indicate that the 2M is fully utilized and packets are being dropped?

xxxxxxx#ping vrf ABC x.x.x.x re 1000

Type escape sequence to abort.

Sending 1000, 100-byte ICMP Echos to x.x.x.x, timeout is 2 seconds:

!!!!.!..!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

!!!!!!!!!!!!!!!!!!!!

Success rate is 99 percent (997/1000), round-trip min/avg/max = 12/216/1972 ms

xxxx#sh policy-map interface fa0/3/0.1221

FastEthernet0/3/0.1221

  Service-policy output: ABC

    Class-map: class-default (match-any)

      114998 packets, 36909265 bytes

      5 minute offered rate 11000 bps, drop rate 0 bps

      Match: any

      Traffic Shaping

           Target/Average   Byte   Sustain   Excess    Interval  Increment

             Rate           Limit  bits/int  bits/int  (ms)      (bytes)

          2000000/2000000   12500  50000     50000     25        6250

        Adapt  Queue     Packets   Bytes     Packets   Bytes     Shaping

        Active Depth                         Delayed   Delayed   Active

        -      0         114998    36909265  1667      2329112   no

Thanks

Mikey


New Member

Re: MPLS VPNs - Latency

Thanks for your inputs Kishore. Pretty helpful.

I raised a ticket with the SP, and they came back saying that the latency/packet drops are due to high utilization.

I enabled graphing on VRF1 and could see that the utilization has not crossed 1.2M (out of the 2M allotted). But during those times, when I ping across to the VRF peer in the MPLS cloud, I still get a latency of about 800 ms. Why is that? A link issue?

From the graph, I noticed that the utilization on the other VRF (6M), primarily used for corporate/Internet traffic, is about 85-90%. Can this heavy utilization affect VRF1, since they are part of the same physical interface Fa0/3/0 and share a total 10M pipe from the SP?

Or can it be a link issue?

Appreciate your inputs again.

Thanks

MIkey

New Member

Re: MPLS VPNs - Latency

Can anyone help me with the query in the post above?

Thanks

Mikey

MPLS VPNs - Latency

Hi Mikey,

Sorry for the late reply. The short answer is it could be both; the link could be bad. When the 10M pipe is full, you will experience congestion and hence latency. What I mean is: there is no QoS or prioritization in place, right? So all the traffic will be FIFO (first in, first out), which is the default on Ethernet media. If your interface queue is already full, you will see that delay.
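To put a number on the FIFO delay, the worst-case wait behind the default 40-packet output queue (the "Output queue: 0/40" shown earlier) can be estimated; the 1500-byte packet size is an assumption for illustration:

```python
def fifo_full_queue_delay_ms(queue_len_pkts, pkt_bytes, line_rate_bps):
    """Worst-case wait behind a full FIFO output queue at a given line rate."""
    return queue_len_pkts * pkt_bytes * 8 / line_rate_bps * 1000

# 40-deep FIFO queue, 1500-byte packets, 10 Mbit/s line:
print(fifo_full_queue_delay_ms(40, 1500, 10_000_000))  # -> 48.0 (ms)
```

Packets arriving once those 40 slots are full are what the "Total output drops" counter on Fa0/3/0 is recording.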

Have I answered your question? Please feel free to ask more if I'm not clear.

Kishore
