I have several sites connected to each other through an ISP. Each site has a 10 Mbps FastEthernet port facing the ISP cloud. I have GRE tunnels between the sites with OSPF running inside them.
I'd like to implement QoS; the question is what the best practices are for doing it.
I was thinking about the following config:
class-map match-any REALTIME
 match ip precedence 5
 match ip dscp ef
class-map match-all MANAGEMENT
 match ip precedence 6
class-map match-all CRITICAL_DATA
 match ip precedence 4
!
policy-map 10Mbps
 class class-default
  shape average 10000000
  service-policy CHILD
! (child queueing policy, referenced here as CHILD, not shown)
!
interface FastEthernet0/0
 service-policy out 10Mbps
The same policy will be applied at each site.
Assuming that markings are copied from the inner IP header to the GRE header, voice traffic in any tunnel should get its priority.
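For a complete picture, the parent shaper would invoke a child queueing policy built on those class-maps. A minimal sketch of such a child policy (the CHILD name and the percentages are placeholders, not values from the original post):

```
policy-map CHILD
 class REALTIME
  priority percent 30
 class MANAGEMENT
  bandwidth percent 5
 class CRITICAL_DATA
  bandwidth percent 40
 class class-default
  fair-queue
```

The priority command gives REALTIME strict-priority (LLQ) treatment, while bandwidth guarantees a minimum to the other classes during congestion.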
But I have the following questions:
Do I need the 10Mbps shaper in the parent policy?
(Traffic is rate-limited on the ISP side; are there any positive effects of shaping on my outside interface?)
What about routing protocol updates and tunnel keepalives? Do I need to assign them their own class, or are there perhaps default markings for them?
Your config will not work, as you have applied the policy to the FastEthernet interface. Once inside the GRE tunnel, no ToS bits are applied to the GRE header.
You need to write the policy against the GRE tunnel and apply it there.
Please correct me if I am mistaken, but as far as I know the ToS bits of the original packet are copied to the GRE outer header.
In later IOS versions this happens by default. Older versions required the qos pre-classify command.
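On those older versions, enabling pre-tunnel classification is a single command on the tunnel interface, e.g.:

```
interface Tunnel0
 qos pre-classify
```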
That is only the case if you use qos pre-classify on the GRE tunnel, and this was not shown in the config supplied.
However, I have found that qos pre-classify is not reliable, so when I use GRE tunnels and need to shape traffic into the tunnel, I apply my QoS policy as it ENTERS the tunnel, not when it leaves the WAN interface.
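In practice that means attaching the policy to the tunnel interface itself rather than to the physical WAN port; a sketch (TUNNEL-POLICY is a placeholder name):

```
interface Tunnel0
 service-policy out TUNNEL-POLICY
```

Note that on many IOS versions a queueing policy on a tunnel has to be nested under a shaper, since the tunnel itself has no physical queue.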
Regarding why to shape, I think this Cisco doc answers it ( http://www.cisco.com/en/US/tech/tk543/tk545/technologies_tech_note09186a00800a3a25.shtml ), although if the interface is configured with speed 10, I think you would not need to shape.
Regarding routing protocols, this doc is also useful.
My understanding and experience match yours, i.e. on later IOS versions GRE tunnels automatically copy the ToS octet, so on such versions your policy should be fine. However, as Vladimir noted, if the interface is actually running at 10 Mbps, there's no need to shape at 10 Mbps. You would only need the shaper if the interface were running at 100 Mbps. (Even then, if the logical restriction were exactly 10 Mbps, it works better to run the interface at 10 Mbps.)
Oh, and when the interface is faster than the logical bandwidth, a shaper often offers much benefit.
BTW, on "qos pre-classify": with the later IOSs it can still be used to match against other pre-tunnel packet header contents. For instance, you could match against source or destination address info, though not use NBAR to examine URL contents.
As for routing protocol updates and tunnel keepalives, both, by default, should fall into your class-default, and with your FQ config they should be okay. If you were to remove the FQ, you might want to place routing protocol updates into their own class (matching IP Prec 6 or 7 also works for some). I'm unsure about the match criteria for the tunnel keepalives. Also, for keepalives, I would expect you would need very severe congestion before the router considers the tunnel down (i.e. possible, but unlikely[?]; again, with FQ they should be fine).
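If you did want an explicit class for routing traffic: IOS marks its locally generated routing packets (e.g. OSPF hellos and updates) with IP precedence 6 (internetwork control), so something like this would catch them (ROUTING and CHILD are placeholder names):

```
class-map match-any ROUTING
 match ip precedence 6
 match ip precedence 7
!
policy-map CHILD
 class ROUTING
  bandwidth percent 5
```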
Thank you for your replies!
I have some other sites with 2 Mbps uplinks where my outside interface speed is set to 10 Mbps or even 100 Mbps. Traffic is simply rate-limited by the ISP at 2 Mbps. So does it make sense to use 2 Mbps shapers there?
I'm also unsure about GRE keepalives and their match criteria. I've had some rare issues with tunnels failing even though FQ was configured. Perhaps it was because of congestion inside the ISP cloud.
One more question: will shapers together with policy maps greatly increase processor usage on the routers?
Border routers are mainly 3800 series routers with an average CPU load of 30%.
"So there is sense to use 2Mbps shapers there? "
Yes, there is. When an ISP rate-limits, they tend to drop all traffic beyond the bandwidth limit without regard to the type of traffic. With a shaper, you can manage the congestion as you choose. (BTW, when an ISP provides something like 2 Mbps of "Ethernet" bandwidth, you need to shape 5 to 15% slower if your shaper doesn't account for L2 overhead [most don't].)
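As a worked example, shaping 10% below a 2 Mbps rate-limit gives 2,000,000 x 0.9 = 1,800,000 bps; a sketch of the parent policy (the names are placeholders):

```
policy-map PARENT-2M
 class class-default
  shape average 1800000
  service-policy CHILD
```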
"I'm also unsure about GRE keepalives and their match criteria. I've had some rare issues when tunnels failing even FQ was configured."
Could be, especially at ISP cloud egress if bandwidth isn't controlled. (E.g. VPN and raw Internet traffic sharing link and/or multiple VPNs on link which oversubscribe link's bandwidth capacity.)
"One more question, will shapers together with policy maps greatly increase processor usage on the routers?
Border routers are mainly 3800 series routers with average cpu load at 30%"
It does add some CPU load, but in my experience it doesn't seem to add much; perhaps 5%(?). With only 30% average utilization, you likely have sufficient headroom.
I've got a few more questions.
I've read that fair-queue is enabled by default on low-speed interfaces (less than 2 Mbps) and works best at low speeds.
What about 10 Mbps interfaces? Are there any disadvantages to applying such a policy-map (with fair-queue enabled on class-default) to them?
And what about random-detect? I understand that this command lets the router randomly drop packets here and there, so TCP window sizes are reduced and TCP traffic flows are throttled. Is it effective on a 10 Mbps link?
Please share your experience.
The default queuing for high-speed interfaces is FIFO. This is quite acceptable on non-overbooked links.
Random-detect is a viable solution as well. We have used it on Metro-LAN connections with a 100 Mbps interface speed and a bandwidth of 20-30 Mbps.
If the WAN speed equals the interface speed, queuing and policing are only effective when the link becomes overbooked.
Using priority queueing for real-time traffic may be an easier option here.
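With MQC, priority treatment is just the priority command in the real-time class; a sketch reserving 1 Mbps of priority bandwidth (the policy name and the figure are placeholders):

```
policy-map CHILD
 class REALTIME
  priority 1000
```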
"I've read that fair-queue is enabled by default on low-speed interfaces(less than 2Mbps) and is working best at low speeds.
What about10Mbps interfaces? Are there any disadvantages on applying such policy-map (with fair-queue enabled on class-default) on them? "
There's interface WFQ and policy-map FQ. The former is only really suitable for low-speed interfaces (as you note); the latter will work with either.
One disadvantage of any fancy queuing is the additional CPU overhead, but its benefit when there's congestion, I believe, outweighs the extra overhead (plus the policy-map version doesn't impose too much extra load).
"And what about random-detect? I do understand that this command enables router to randomly drop packets here and ther, tcp window sizes are reduced and tcp traffic flows are limited. Is it effective on 10Mbps link? "
Yes, it can be effective on any congested interface, including 10 Mbps. However, as you note, it's really targeted at TCP flows, and there are many considerations to get it to work best. It can help achieve the best "goodput" for bandwidth-hungry TCP flows, but since it offers no prioritization (except in drop selection with WRED), I believe class-based queuing is often the better QoS technology to implement first (although RED/WRED can also often be used alongside it).
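For reference, enabling WRED on the default class of a policy-map is a one-liner; a sketch (the policy name is a placeholder; dscp-based is optional, precedence-based being the default):

```
policy-map CHILD
 class class-default
  fair-queue
  random-detect dscp-based
```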