QoS on GRE tunnels - multiple tunnels per interface

Unanswered Question
Feb 27th, 2006

This is maybe a bit of a mind bender..

I am having problems with QoS policy and multiple GRE tunnels on the same interface.

If I could do without the GRE tunnels I would... But I see using GRE tunnels in this case as not much different than if I had used IPSEC tunnels for security reasons...

FYI - I have read the GRE and VPN Tunnel QoS documents, including the use of the pre-classify command and where to apply the service policies, but I still cannot get the results I desire.


Location A is a central location which has a 45Mb connection to a private IP network. Locations B through Z should be understood to represent remote locations on the private IP network that are connected at speeds from 500Kb to 9Mb.

Data is classified into three classes: gold, silver, and bronze (a.k.a. class-default).

Packets are marked by an input service-policy applied to the ethernet interface of the router.


class-map match-any mark-gold
 {gold matching criteria}
class-map match-any mark-silver
 {silver matching criteria}

policy-map mark-traffic
 class mark-gold
  set ip dscp {gold-dscp-value}
 class mark-silver
  set ip dscp {silver-dscp-value}

interface gig 0/0
 service-policy input mark-traffic


The GRE tunnels are configured with generic traffic shaping to prevent the central location from sending out faster than the remote location can receive. In addition, an output service policy is applied to prioritize the gold and silver traffic.


policy-map shape-to-1024Kb
 class class-default
  shape average 972800  ! shape to 95% of remote speed
  service-policy prioritize-outbound-traffic

class-map gold
 match ip dscp {gold-dscp-value}
class-map silver
 match ip dscp {silver-dscp-value}

policy-map prioritize-outbound-traffic
 class gold
  priority percent 33
 class silver
  bandwidth percent 25
 class class-default

interface tunnel1
 description location A to location B example
 bandwidth 1024
 service-policy output shape-to-1024Kb



I know that all the traffic associated with each tunnel is a) shaped appropriately and b) prioritized gold, silver, and bronze. The shaping and service policies on the tunnels seem to work fine; the show policy-map interface commands bear this out.

The problem seems to be when the traffic from each tunnel makes it to the serial hardware interface.

Occasionally we have big bursts of data resulting in congestion on the outgoing serial interface.

By default the serial hardware queueing strategy is FIFO. As the flows from the multiple tunnels converge on the serial interface they are treated in a FIFO manner and, as can be expected with FIFO, the first hog to the trough wins; i.e., my 9Mb site runs roughshod over the smaller sites...

What I want to happen is for ALL the gold traffic to be favored over the silver and bronze, just like within the tunnel. What I see happening is that some gold traffic is delayed in the FIFO process and jitter results...

I tried applying the prioritize-outbound-traffic policy as an outbound service-policy on the serial interface. The show policy-map interface counters seem to indicate that this does not work: the 5-minute offered load counters for the gold, silver, and default classes do not add up to the known offered load on the serial interface (they don't even add up to 10% of the known offered load!!)

As I stated above, I have played with the pre-classify command and with the placement of the service policy: on the tunnel, and on both tunnel and hardware.


1. Is fancy-queuing within fancy-queuing like this supported?

2. If it is supported, what would be a workable configuration?

3. If this is unsupported or simply is the wrong approach, what would be the correct queueing strategy for this scenario? e.g. Do I shape-only on the tunnel and fancy-queue-only on the hardware?

Thanks in advance for any insights.

mheusinger Mon, 02/27/2006 - 19:01


Shaping on the tunnel and queueing on the hardware interface seems to be the natural choice to me in your case. It will make sure that the remote sites are not overrun, and queueing (CBWFQ/LLQ) on the serial will distribute resources among your traffic classes the way you would like.
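A minimal sketch of that split, reusing the poster's class and policy names (interface numbers are placeholders): the tunnel policy shapes only, with no nested child, so queueing happens exactly once, on the serial.

policy-map shape-only-1024Kb
 class class-default
  shape average 972800

interface Tunnel1
 service-policy output shape-only-1024Kb

interface Serial0/0
 service-policy output prioritize-outbound-traffic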

Usually I would expect to see proper DSCP settings on the GRE packets and a matching output policy on the serial. Or you could use "qos pre-classify" to match on the original IP header.
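For example (a sketch; the interface number is a placeholder): GRE copies the inner ToS byte to the outer header by default, so a DSCP match on the serial should already see the markings; "qos pre-classify" is needed only if the serial policy must match other inner-header fields such as addresses or ports.

interface Tunnel1
 qos pre-classify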

If all of this (for whatever reason) does not work you could also use qos-groups on the input and mark and queue GRE on the output based on the qos-groups.
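A sketch of the qos-group variant (values are placeholders): set the group in the existing input marking policy, then match it on the serial output. A qos-group value is local to the router and is carried with the packet internally, so it survives GRE encapsulation without modifying the packet.

policy-map mark-traffic
 class mark-gold
  set ip dscp {gold-dscp-value}
  set qos-group 1
 class mark-silver
  set ip dscp {silver-dscp-value}
  set qos-group 2

class-map qos-gold
 match qos-group 1
class-map qos-silver
 match qos-group 2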

Hope this helps! Please rate all posts.

Regards, Martin

pkhatri Mon, 02/27/2006 - 21:02


While I don't think there is any documentation on CCO stating that your configuration (shaping on tunnel interfaces followed by queueing on the physical interface) is unsupported, I suspect that is the case. My reasoning is the similar sort of issue present when using Frame Relay traffic shaping (FRTS): with FRTS, non-FIFO queueing methods are not supported on the physical interface. Your config results in a similar issue.

Now, you mentioned that you were experiencing drops on the physical interface. There are two possible ways of addressing this (and Martin has mentioned one of these too):

- configure the shaping on your Tunnel interfaces so that the sum of the shaped bandwidths is less than or equal to your physical interface bandwidth. That way, you will never overrun your physical interface.

- the alternative is to do all queueing and shaping on the physical interface. Use pre-classify and then create one class for each tunnel. Within this class, use hierarchical queueing to differentiate on the basis of your gold/silver bandwidth. An example follows:

class-map tunnel1
 match access-group {tunnel1-acl}
class-map tunnel2
 match access-group {tunnel2-acl}
class-map tunnel3
 match access-group {tunnel3-acl}

policy-map prioritize-outbound-traffic
 class gold
  priority percent 33
 class silver
  bandwidth percent 25
 class class-default

policy-map PhysIntfPolicy
 class tunnel1
  bandwidth 128
  shape average 256000
  service-policy prioritize-outbound-traffic
 class tunnel2
  bandwidth 64
  shape average 128000
  service-policy prioritize-outbound-traffic

The above policy will allow you to burst above the guaranteed bandwidth and will allow you to prioritise traffic within this as per your requirements. In such a case, having an aggregate shaped rate of greater than the physical interface bandwidth will not matter since the traffic will not get shaped to this rate unless there is capacity on the link.
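To complete the sketch, the per-tunnel classes could match the GRE outer headers with ACLs, and the parent policy attaches to the physical interface (ACL numbers, addresses, and the interface number below are placeholders). Because the ACLs match the post-encapsulation GRE headers, they work on the physical interface without pre-classify; pre-classify matters only for matching inner-header fields.

access-list {tunnel1-acl} permit gre host {locA-tunnel-source} host {locB-tunnel-destination}

interface Serial0/0
 service-policy output PhysIntfPolicy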

Hope that helps - pls rate the post if it does.


john.dean Thu, 03/02/2006 - 07:04
Cisco TAC was able to add the following additional information.

According to the design guide:


"Multiple traffic policies on tunnel interfaces and physical interfaces are not supported if the interfaces are associated with each other. For instance, if a traffic policy is attached to a tunnel interface while another traffic policy is attached to a physical interface with which the tunnel interface is associated, only the traffic policy on the tunnel interface works properly."


But the good news is that, starting from the IOS 12.4 mainline, two independent policies are supported on the tunnel and main interfaces, on the 7200 and low-end platforms.
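In other words, on 12.4 mainline the poster's original layout should work as intended (a sketch reusing the poster's policy names; interface numbers are placeholders):

interface Tunnel1
 service-policy output shape-to-1024Kb

interface Serial0/0
 service-policy output prioritize-outbound-traffic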

