My company accesses WAN web applications (webmail) over a DMVPN tunnel. The problem is that normal Internet traffic can consume the entire circuit, and then the corporate webmail becomes really slow. I decided to build a QoS policy to protect the DMVPN traffic. Creating the outbound policy was simple: I made a CBWFQ policy and applied it outbound on the outside interface. The catch is that an Internet link typically congests inbound, not outbound. I did some research and found a few possible solutions.
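For the outbound side, the CBWFQ policy can be as simple as one class that guarantees bandwidth to the tunnel traffic. A minimal sketch, assuming the DMVPN traffic is ESP (the ACL/class names and the percentage here are illustrative, not my exact config):

```
ip access-list extended DMVPN_TRAFFIC
 permit esp any any
class-map match-all DMVPN
 match access-group name DMVPN_TRAFFIC
policy-map qos_out
 class DMVPN
  bandwidth percent 50
 class class-default
  fair-queue
int [OUTSIDE INT]
 service-policy output qos_out
```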
One: Police non-DMVPN inbound traffic from the Internet to leave room for the DMVPN traffic. The problem with this solution is that now the Internet traffic cannot spike to full circuit speed when there is no DMVPN traffic.
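For completeness, option one would look something like this: classify the tunnel traffic so it bypasses the policer, and police everything else inbound. The rate, ACL, and names are illustrative, and I'm assuming the tunnel traffic is ESP:

```
ip access-list extended DMVPN_TRAFFIC
 permit esp any any
class-map match-all DMVPN
 match access-group name DMVPN_TRAFFIC
policy-map police_in
 class DMVPN
 class class-default
  police [RATE] conform-action transmit exceed-action drop
int [OUTSIDE INT]
 service-policy input police_in
```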
Two: Request the ISP to provide QoS for ESP traffic destined to us. I was hoping to find a solution that I could apply to our router so that we could deploy the solution to all of our DMVPN sites without having to negotiate with each ISP to configure QoS policies.
Three: Tunneling all Internet access through regional hubs. This solution isn't an efficient use of bandwidth; however, I see the benefits of being able to centralize security devices.
So I played around a little bit and came up with another solution.
Create an outbound QoS policy on the inside interface of the router. (It has to be outbound because IOS only supports queuing methods in the outbound direction.)
The trick is that you first have to shape the traffic down to the download rate of the Internet circuit so that the router, rather than the ISP, becomes the congestion point. In fact, I decided to shape to 90 percent of the maximum download rate so that I knew my router was dropping packets before the ISP did (e.g., a 10 Mbps circuit gets shaped to 9 Mbps). Then I created a child policy within that shaped policy to apply my queuing-based QoS.
For simplicity I am just marking packets with DSCP 21 at the DMVPN head end and then using DSCP-based WRED as the drop policy at the remote site.
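On the head-end router, the tagging is just a marking policy applied before the traffic enters the tunnel. A minimal sketch, where the match criteria for webmail (ACL, port) are purely illustrative:

```
ip access-list extended WEBMAIL_ACL
 permit tcp any any eq 443
class-map match-all WEBMAIL
 match access-group name WEBMAIL_ACL
policy-map mark_dscp
 class WEBMAIL
  set ip dscp 21
int [TUNNEL INT]
 service-policy output mark_dscp
```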
policy-map qos_in
 class class-default
  shape average [BANDWIDTH * .9 in bps]
int [INSIDE INT]
 service-policy output qos_in
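Filling in the WRED child policy, the complete hierarchical policy at the remote site might look like this (the policy names are illustrative; note that `shape average` takes a rate in bits per second, not kbps):

```
policy-map qos_wred
 class class-default
  random-detect dscp-based
policy-map qos_in
 class class-default
  shape average [BANDWIDTH * .9]
  service-policy qos_wred
int [INSIDE INT]
 service-policy output qos_in
```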
So far the results have been very positive. Before applying this policy we were experiencing slowness with our webmail; we have been running this configuration for months now and it hasn't been slow since. When I look at the policy-map stats (show policy-map interface), I see more DSCP 0 packets being dropped than DSCP 21. I have also tweaked the WRED queue thresholds because I wanted the policy to react faster to bursts of traffic.
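The WRED knobs in question are the per-DSCP minimum and maximum thresholds; lowering them makes the policy start dropping earlier in a burst. The values below are hypothetical, just to show the syntax (min-threshold, max-threshold, mark-probability denominator), with best-effort traffic set to drop earlier than DSCP 21:

```
policy-map qos_wred
 class class-default
  random-detect dscp-based
  random-detect dscp 0 15 30 10
  random-detect dscp 21 30 40 10
```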
I'm looking for comments and suggestions. Has anyone else found ways to deal with inbound QoS on an Internet pipe?