policy-map on tunnel or physical interface?

Answered Question
Apr 28th, 2008

Hi all,


I have a 3800 headend router which has a number of IPsec tunnels to remote office sites. Our current QoS design applies a policy-map to each tunnel interface to prioritise and shape outbound traffic.


My question is: how does the physical egress interface queue and transmit traffic from the tunnel interfaces with this design? For example, if a mixture of large data packets and voice packets from different tunnel interfaces hits the physical interface at around the same time, what will happen to the voice packets?


Furthermore, would it be better to apply the policy-map to the physical interface instead of the tunnel interfaces? What advantages, if any, would this bring?


Many thanks.

Joseph W. Doherty Tue, 04/29/2008 - 16:21

It should depend on whether your tunnel shapers oversubscribe the physical interface. If so (i.e. you oversubscribe), then you could get congestion at the physical interface, which would be handled by the default queuing for that interface type. If this is the case, you would need a policy on the physical interface to prioritize important tunnel traffic. (This is in addition to the tunnel policy.) Selecting the correct packets to prioritize could be based on the original packet's ToS copied to the tunnel packet's ToS, or via qos pre-classify.
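
For instance, a minimal sketch of that combination, assuming qos pre-classify, where PHYSICAL_QOS, the Voice class and the dscp ef match are all hypothetical placeholders rather than anything from your config:

!hypothetical names, a minimal sketch only
class-map match-all Voice
 match ip dscp ef
!
policy-map PHYSICAL_QOS
 class Voice
  priority percent 15
 class class-default
  fair-queue
!
interface Tunnel0
 qos pre-classify
!
interface Ethernet0
 service-policy output PHYSICAL_QOS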


steve_mils Wed, 04/30/2008 - 00:53

Each tunnel uses the same nested policy, shaped to the speed of the physical internet link (10Mbps):


policy-map QUEUE_DATA
 class Voice
  priority percent 15
 class Interactive_Video
  priority percent 20
 class Call_Signaling
  bandwidth percent 4
 class Network_Control
  bandwidth percent 4
 class Critical_Data
  bandwidth percent 27
  random-detect dscp-based
 class Bulk_Data
  bandwidth percent 4
  random-detect dscp-based
 class Scavenger
  set dscp cs1
 class class-default
  bandwidth percent 25
!
policy-map NESTED_QOS
 class class-default
  shape average 10000000
  service-policy QUEUE_DATA
!
interface Tunnel0
 description Remote Site X
 service-policy output NESTED_QOS
!
interface Tunnel1
 description Remote Site Y
 service-policy output NESTED_QOS
!
interface Tunnel2
 description Remote Site Z
 service-policy output NESTED_QOS
!


Would it be more efficient / effective to apply the policy-map to the physical interface instead of the tunnels?

Correct Answer
Joseph W. Doherty Wed, 04/30/2008 - 03:30

If you're shaping each tunnel to the outbound physical bandwidth, yes, it would be better to just have the policy, without any shaping, on the physical interface. Again, you'll either need to depend on a copied ToS value in the outbound packet or use qos pre-classify. (A single physical policy would be much like your QUEUE_DATA if using qos pre-classify.)


e.g.

!assumes qos pre-classify
interface Ethernet0
 service-policy output QUEUE_DATA
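
Note the assumption: qos pre-classify must itself be enabled on each tunnel interface, e.g. (Tunnel0 as an example name):

interface Tunnel0
 qos pre-classify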


What I thought you might be doing, and you could also do, is shape each tunnel to the far side's ingress bandwidth. This would require either a distinct policy for every tunnel interface (where the shaper values differ), or a policy on the physical interface that has a class per tunnel (matching against the tunnel destination address).


e.g.


!assumes local outbound interface not oversubscribed
policy-map NESTED_QOS_512K
 class class-default
  shape average 512000
  service-policy QUEUE_DATA
!
policy-map NESTED_QOS_768K
 class class-default
  shape average 768000
  service-policy QUEUE_DATA
!
policy-map NESTED_QOS_1500K
 class class-default
  shape average 1500000
  service-policy QUEUE_DATA
!
interface Tunnel1
 service-policy output NESTED_QOS_768K
!
interface Tunnel2
 service-policy output NESTED_QOS_512K
!
interface Tunnel3
 service-policy output NESTED_QOS_1500K
!
interface Tunnel4
 service-policy output NESTED_QOS_512K


e.g.


!assumes local outbound interface not oversubscribed
class-map match-all Tunnel1
 match access-group (ACL that matches tunnel1 destination address)
!
class-map match-all Tunnel2
 match access-group (ACL that matches tunnel2 destination address)
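
For illustration, the placeholder could be a named extended ACL keyed to each tunnel's destination address (the TUNNEL1_DST/TUNNEL2_DST names and the 192.0.2.x addresses below are invented):

ip access-list extended TUNNEL1_DST
 permit ip any host 192.0.2.1
!
ip access-list extended TUNNEL2_DST
 permit ip any host 192.0.2.2

with the class-maps above then using match access-group name TUNNEL1_DST and match access-group name TUNNEL2_DST.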


policy-map outbound_tunnels
 class Tunnel1
  shape average 768000
  service-policy QUEUE_DATA
 class Tunnel2
  shape average 512000
  service-policy QUEUE_DATA
!
interface Ethernet0
 service-policy output outbound_tunnels


If the far-side bandwidths together exceed your local outbound physical bandwidth, then you should have both the tunnel policies that shape each tunnel and a physical interface policy.


e.g.

!assumes local outbound interface is oversubscribed
policy-map NESTED_QOS_512K
 class class-default
  shape average 512000
  service-policy QUEUE_DATA
!
policy-map NESTED_QOS_768K
 class class-default
  shape average 768000
  service-policy QUEUE_DATA
!
policy-map NESTED_QOS_1500K
 class class-default
  shape average 1500000
  service-policy QUEUE_DATA
!
interface Tunnel1
 service-policy output NESTED_QOS_768K
!
interface Tunnel2
 service-policy output NESTED_QOS_512K
!
interface Tunnel3
 service-policy output NESTED_QOS_1500K
!
interface Tunnel4
 service-policy output NESTED_QOS_512K
!
!assumes qos pre-classify
interface Ethernet0
 service-policy output QUEUE_DATA
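
Whichever variant is used, per-class matches, drops and shaper behaviour can be checked with the standard MQC show command, e.g.:

show policy-map interface Ethernet0
show policy-map interface Tunnel1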

steve_mils Wed, 04/30/2008 - 06:04

Hi,


The physical egress interface is 10Mbps on the headend, and we have remote sites that have 10, 4 and 4Mbps internet links respectively. Based on your feedback, it seems the best approach for the headend would be to have separate tunnel policies which shape at 10, 4 and 4Mbps, and then an overall 10Mbps shaper on the physical interface. We'll also use qos pre-classify.


Many thanks!

Joseph W. Doherty Wed, 04/30/2008 - 09:20

You shouldn't need a shaper for your 10 Mbps tunnel or the egress interface; i.e. something like this might do:


policy-map NESTED_QOS_4M
 class class-default
  shape average 4000000
  service-policy QUEUE_DATA
!
interface Tunnel1
 !the physical interface's policy will manage congestion for this tunnel
!
interface Tunnel2
 service-policy output NESTED_QOS_4M
!
interface Tunnel3
 service-policy output NESTED_QOS_4M
!
!assumes qos pre-classify
interface Ethernet0
 service-policy output QUEUE_DATA
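
If qos pre-classify weren't available, the alternative mentioned earlier is to classify on the ToS/DSCP value that IPsec copies into the outer header; a hypothetical Voice class-map for that approach might be:

!hypothetical sketch of the copied-ToS alternative
class-map match-all Voice
 match ip dscp ef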


steve_mils Thu, 05/01/2008 - 02:45

I've just found out that the 10Mbps circuit to the provider is burstable to 40Mbps and is provided over a 100Mbps bearer. Would it therefore be advisable to create a QUEUE_DATA_10M for the 10Mbps tunnel, because this could in theory get swamped?

Joseph W. Doherty Thu, 05/01/2008 - 03:33

Assuming your remote sites are physically limited to 10, 4 and 4 Mbps: yes, if the circuit is burstable to 40 Mbps and the physical interface supports 100 Mbps or gig. Shape the 10 Mbps tunnel at 10 Mbps. There's no need to do anything at the physical interface, since both a 100 Mbps physical and a 40 Mbps logical cap can deal with the combined 18 Mbps without the need to queue.


No, if your handoff circuit is truly stuck at 10 Mbps. The physical 10 Mbps at the interface will "shape" both your total traffic and the 10 Mbps tunnel.
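
A sketch of the burstable case, following the thread's naming convention (NESTED_QOS_10M is a new, hypothetical name):

policy-map NESTED_QOS_10M
 class class-default
  shape average 10000000
  service-policy QUEUE_DATA
!
interface Tunnel1
 service-policy output NESTED_QOS_10M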
