Adding to service policy to mark server as priority

thomuff
Level 3

Today I have a service policy for voice. I want to add a class to mark traffic from a web server out to the WAN as priority traffic. Below is the config. Can you tell me if it is right? Also, should I use precedence 1 or cs1?

class-map match-any QoS-VoIP-RTP-Trust
 match ip dscp ef
class-map match-any QoS-VoIP-Control-Trust
 match ip dscp cs3
 match ip dscp af31
!
policy-map QoS-Policy-Trust
 class QoS-VoIP-RTP-Trust
  priority percent 70
 class QoS-VoIP-Control-Trust
  bandwidth percent 5
 class class-default
  fair-queue
!
int G0/1
 service-policy output QoS-Policy-Trust

I want to add:

ip access-list extended Web-Apps
 permit ip host 10.2.1.1 any
!
class-map match-any Apps
 match access-group name Web-Apps
!
 class Apps
  bandwidth 3200
  set precedence 1

10 Replies

Giuseppe Larosa
Hall of Fame

Hello Thomas,

You can use ip precedence 1 or ip dscp cs1, which are the same.

I would mark inbound on the internal LAN interface, and I would leave only the bandwidth command on the outbound scheduler.

Marking in the scheduler is possible, but it is clearer if you mark inbound.

On the scheduler you can then match the marking:

class-map match-any dataplus
 match ip dscp cs1
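
Put together, that approach might look something like this minimal sketch (the policy name Mark-Web-Inbound, the inside interface g0/0, and the 5 percent figure are placeholders, not from the posts above; the Apps class-map is the one from the original post):

policy-map Mark-Web-Inbound
 class Apps
  set ip dscp cs1
!
interface GigabitEthernet0/0
 ! internal LAN interface (placeholder name)
 service-policy input Mark-Web-Inbound
!
policy-map QoS-Policy-Trust
 class dataplus
  bandwidth percent 5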

Also, you are using priority percent 70. That is possible, but it looks a little high on a LAN link.

It is true that unused resources are left to the other classes, but this limits the ability to assign resources to new classes.

Hope to help

Giuseppe

Thanks for the reply. The g0/1 is an Ethernet handoff to our carrier's router, which is providing WAN connectivity.

Joseph W. Doherty
Hall of Fame

As Giuseppe also notes, 70% is rather high for LLQ (priority classes). You might want to evaluate how much bandwidth you need to support VoIP and set accordingly.

On many CBWFQ implementations, a sum of 75% is all you can define unless you modify the default bandwidth reservation. This means your additional class might be rejected (as it appears the normal 75% default bandwidth allocation rule would be violated).
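
If your IOS does enforce that 75% ceiling, the classic workaround (where still supported) is the interface-level max-reserved-bandwidth command; a sketch, with the value chosen only for illustration:

interface GigabitEthernet0/1
 ! raise the reservable share from the default 75 percent (check platform support)
 max-reserved-bandwidth 90
 service-policy output QoS-Policy-Trust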

It would probably be clearer not to mix absolute bandwidths with percentage bandwidths.

In your reply to Giuseppe, since you mention there's a downstream carrier router that actually controls access to the WAN, you want your markings to conform to their expectations. It's possible the carrier would use AF2x or AF3x for priority business traffic. Although IP Precedence 1 is a higher priority than BE, some of the later QoS schemes use one of the AF1x class markings to indicate scavenger or less-than-BE traffic. Again, this should be discussed with your carrier.
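
For example, if the carrier turned out to expect AF21 for its priority business class (purely an assumption, to be confirmed with them), only the set action in your Apps class would change:

policy-map QoS-Policy-Trust
 class Apps
  set ip dscp af21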

Although your policy may set the ToS marking correctly, prioritization won't happen unless there's congestion. Assuming the WAN offers less bandwidth than your g0/1 interface, you might want to first shape to match the WAN bandwidth. (Although if the carrier does QoS correctly, you shouldn't need to worry about this.)
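
A common way to do that is a hierarchical parent-shaper/child-queueing policy; a sketch, with the 50 Mbps shape rate standing in for whatever rate the carrier actually provisions:

policy-map Shape-To-WAN
 class class-default
  shape average 50000000
  service-policy QoS-Policy-Trust
!
interface GigabitEthernet0/1
 no service-policy output QoS-Policy-Trust
 service-policy output Shape-To-WAN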

IP precedence 1 and CS1 are not the same marking. IP Prec 1 sets only the first three bits, which are the same as the first three bits of CS1 or AF1x. If you're using DSCP ToS, use CS1, not IP Prec 1. (If you use IP Prec 1, the bottom three bits are left as is, and you might later need to tell the difference between CS1, AF11, AF12, and AF13; you're not guaranteed what you've marked.)
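
In config terms the difference is just which set command the class uses; a sketch against the Apps class from the original post:

policy-map QoS-Policy-Trust
 class Apps
  ! set ip dscp cs1 writes all six DSCP bits (001000, decimal 8)
  set ip dscp cs1
  ! whereas set ip precedence 1 would write only the top three bits (001xxx)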

Also, on many platforms/IOS versions, using class-default FQ alongside defined non-LLQ class bandwidths can undermine those classes' bandwidth guarantees. If there will be a lot of traffic in class-default, and the other non-LLQ class bandwidth guarantees are crucial, you might need to deactivate FQ.
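
If it comes to that, removing FQ from class-default is a one-line change; a sketch:

policy-map QoS-Policy-Trust
 class class-default
  no fair-queue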

Here is my test policy, applied to the switchport that connects the server:

class-map match-any HeyMON
 match access-group name HeyServer
!
ip access-list extended HeyServer
 permit ip host 192.168.2.1 any
!
policy-map Qos-HeyNow
 class HeyMON
  set ip precedence 1
!
int g0/40
 service-policy input Qos-HeyNow

It seems to be working. However, I now need to do the same at the client location, where traffic destined for 192.168.2.1 gets marked.

Also, here is a snippet from the carrier's marking table:

The carrier maps IP precedence 1 (TOS binary 001xxx, where x = 0 or 1, i.e. DSCP decimal 8-15) into its queue 2:

DSCP name   DSCP decimal   Carrier match   Queue
cs1         8              dscp cs1        2
cp9         9              dscp cp9        2
af11        10             dscp af11       2
cp11        11             dscp cp11       2
af12        12             dscp af12       2
cp13        13             dscp cp13       2
af13        14             dscp af13       2
cp15        15             dscp cp15       2

I forgot to ask:

Correcting the other settings, would I add the class to the existing policy map?

"Correcting the other settings, would I add the class to the existing policy map? "

Well, your original policy was an output policy and this one is an input policy, so it's unclear how you want to combine them, although you can certainly use both policies and/or have a single input or output policy that combines them logically.

Sorry, I meant to say "could I" instead of "would I".

The input policy is looking for traffic coming from the server (192.168.2.1) at the hosting facility.

Then, at the remote location, I have the service policy applied on the serial interface output. I would invert the access list to look for any traffic destined for 192.168.2.1.
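
A sketch of that inverted match at the remote site (the ACL and class names here are placeholders, not from the earlier posts):

ip access-list extended To-HeyServer
 permit ip any host 192.168.2.1
!
class-map match-any To-HeyMON
 match access-group name To-HeyServer

That class could then be referenced either in the existing output policy on the serial interface or in an inbound marking policy on the branch LAN port.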

Yes, you could have an input policy on the branch ingress link that marks the packet. However, unless something downstream from that point is going to use the marking, there's often no point in doing so.

Realize the real purpose of marking packets is to save downstream devices the effort of reclassification. They can then make decisions about packet QoS treatment by looking at a few bits of the ToS octet rather than IP addresses, port numbers, other packet contents, etc.
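
For instance, once the edge has marked the traffic, a downstream router only needs something like the following instead of re-applying the ACL (the names and the 5 percent figure are illustrative):

class-map match-any Marked-Web
 match ip dscp cs1
!
policy-map Downstream-Out
 class Marked-Web
  bandwidth percent 5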

Special packet treatment is generally only necessary when there's possible congestion that might impact network application performance requirements. On many networks, the primary congestion points that impact application performance are the initial WAN link interfaces. On "cloud" WANs, cloud egress is often another primary congestion point.

That is exactly our issue: backups across WAN links at night causing congestion.

Often the solution for that is either to deprioritize such backup traffic with QoS or otherwise ensure it doesn't monopolize the bandwidth. FQ alone, unless there are many backup flows, often works well. If you are working with a WAN cloud, depending on the cloud technology and logical topology, you either need to indirectly manage your QoS using a provider QoS template or avoid congesting within the cloud by shaping (before your traffic enters the WAN cloud).
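
As a sketch of that last option, a parent shaper with a small child class for the backup traffic (the ACL contents, names, and rates are all placeholders to be replaced with your own):

ip access-list extended Backup-Traffic
 permit ip host 10.2.1.50 any
!
class-map match-any Backups
 match access-group name Backup-Traffic
!
policy-map Deprioritize-Backups
 class Backups
  bandwidth percent 5
 class class-default
  fair-queue
!
policy-map Shape-To-Cloud
 class class-default
  shape average 20000000
  service-policy Deprioritize-Backups
!
interface GigabitEthernet0/1
 service-policy output Shape-To-Cloud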
