Applying QoS to Outbound Internet Connection

mnlatif
Level 3

Hi,

We have two T1s that are used for Internet access. During most of the day the bandwidth is almost at 100%, mainly due to SMTP (attachments) and FTP traffic, which results in pretty bad HTTP performance.

As a first step I enabled WFQ on the serial interface, which should help give preference to low-volume traffic.

I also want to implement the items below, but wanted to get input on whether they would be useful:

1. Use class/policy maps to identify SMTP and FTP traffic and then police the outbound rate, so that this traffic doesn't take 100% of the interface bandwidth.

2. Can the same policy maps (as above) be used to rate-limit inbound traffic as well?

3. Is the service-policy command applied to the physical interface or to the sub-interface (configured with the actual IP)?

4. Is there a way, using NBAR, to identify file transfers over HTTP as opposed to regular HTML browsing?

5. If I want to rate-limit SMTP/FTP traffic only when the interface is fully congested, how can I do that? (Using custom queueing?)

Thanks,

Naman


10 Replies

Joseph W. Doherty
Hall of Fame

1. Yes you can with CBWFQ.

2. Yes, but often not very effective unless you set the rate very low. This is because the first bottleneck is likely your upstream ISP sending to you.

3. Depends on the interface type. For example, ATM can be done per PVC but frame-relay is on the parent.

4. Not that I'm aware of.

5. Normally no, although you could if you had some script that monitored the bandwidth and changed the policy on the fly...

Other suggestions:

Besides rate limiting, you can implement shaping.

For #5 (and indirectly #1), I find it better to not limit the bulk traffic classes but instead permit them a minimal floor of bandwidth using CBWFQ. They can then use any available excess bandwidth but will not monopolize the bandwidth.

My guess is your WFQ is performing better than you might assume, but I suspect the ISP side isn't doing it inbound toward you. If not, ask them to activate it and see if it helps.

The advantage of CBWFQ over WFQ is you can much better customize how traffic shares the bandwidth.
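For concreteness, a minimal CBWFQ sketch along those lines (the class name, matched protocols, percentages, and interface are illustrative assumptions, not from any actual config in this thread):

```
! hypothetical class grouping the bulk traffic
class-map match-any BULK
 match protocol smtp
 match protocol ftp
!
policy-map WAN-OUT
 class BULK
  ! a floor, not a cap: BULK may still use excess bandwidth when free
  bandwidth percent 10
 class class-default
  fair-queue
!
interface Serial0/0
 service-policy output WAN-OUT
```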

Thanks for the very helpful response. I totally agree with the idea of implementing CBWFQ to have more granular control.

Am I correct in understanding that when you use the "bandwidth" command (in a policy map), the matching traffic can exceed that rate (if bandwidth is available), and that this queue is serviced along with other queues in a round-robin fashion?

With the "priority" command the matching traffic cannot exceed the configured rate but the queue is always serviced ahead of other queues ?

If my goal is to prioritize HTTP traffic over FTP traffic when the link is congested, should I configure the "bandwidth" command for HTTP and enable WFQ for the default class? Is that the right approach?

Thanks for the help.

Naman

Correct, a class can use more bandwidth than what the bandwidth statement provisions. The bandwidth statement sets a floor and also determines the ratio to other classes when there's excess bandwidth but more than one class desires it.

Classes are serviced in a proportioned round-robin fashion based on the bandwidth settings. (Similar to CQ or WFQ when IP precedence is set.)

E.g. with

class x
 bandwidth percent 25
class y
 bandwidth percent 5

if both want excess bandwidth, they would be serviced in a 5:1 ratio.

Also correct, the class, or classes, with a priority statement is/are serviced first. It also will drop traffic in excess of the specification.
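As a sketch of that difference (class names and the 256 kbps rate are hypothetical):

```
policy-map example
 class voice
  ! always serviced first, but traffic above 256 kbps is dropped under congestion
  priority 256
 class bulk
  ! a minimum guarantee; may be exceeded when bandwidth is free
  bandwidth percent 10
```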

To prioritize HTTP over FTP, I would suggest something like:

class-map ftp
 match protocol ftp
!
policy-map cbwfq
 class ftp
  bandwidth percent 1
 class class-default
  fair-queue

or

policy-map cbwfq
 class ftp
  bandwidth percent 1
 class class-default
  bandwidth percent 25

interface serial #
 service-policy output cbwfq

When using FQ within the default class on many Cisco platforms, you have no control over the ratio between the multiple individual FQ queues and the defined class or classes.

PS:

Remember, the ideal placement of this policy is not only on your interface facing out but on the provider's interface facing you (out from them too).

If you don't specify a bandwidth percent, does that mean it is going to get the rest of the bandwidth?

E.g.

In your config, is class-default going to get the rest of the available bandwidth? And will that establish the ratio of how the packets are going to be serviced?

++++++++

policy-map cbwfq
 class ftp
  bandwidth percent 1
 class class-default
  fair-queue

++++++++++

"In your config, class-default is going to get rest of available bandwidth?" Don't believe it will get the rest, but will compete with the other defined classes. Which leads to your next question.

"And that will establish the ratio of how the packets are going to be serviced?" Not clear exactly how the ratio is established.

Reason I didn't define a bandwidth percentage within class-default with FQ active, it hasn't been clear to me whether it controls bandwidth ratios or provides a floor for the FQ flows, at least on non-7500 platforms. See the section "Understand Platform Differences" within http://www.cisco.com/en/US/tech/tk39/tk48/technologies_tech_note09186a00800fe2c1.shtml#platform.

What's important in this configuration example is restricting FTP flows to just one queue with a minimal bandwidth guarantee.

If it's important to really set the class-default bandwidth (and its bandwidth ratio to other classes, and sometimes it is), I then use FIFO for the class, just as all the other classes are doing (on non-7500s).

Deprioritizing bandwidth hogs (e.g. FTP), prioritizing critical traffic (e.g. VoIP), and using FQ for everything else works well in most cases.

Hi,

With reference to the above suggestion, is there any advantage in configuring 'shaping' for the high-bandwidth-usage traffic?

E.g. based on the NBAR analysis, SMTP and FTP are hogging the interface. If I configure shaping for SMTP, am I correct in assuming that shaping is 'only' activated when the 'link' is congested?

If none of the traffic is being marked with DSCP values, is there any advantage to enabling WRED?

Thanks,

Naman

"Is there any advantage in configuring 'shaping' for the high BW usage traffic ?"

Perhaps, if you want to strictly cap its bandwidth usage. (Shaping is much like policing, except the former queues [i.e. delays] traffic over the specified rate while the latter drops such traffic.)
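To illustrate the distinction (the class name and the 512 kbps rate are hypothetical):

```
policy-map cap-smtp-shape
 class smtp
  ! shaping: traffic above 512 kbps is queued (delayed)
  shape average 512000
!
policy-map cap-smtp-police
 class smtp
  ! policing: traffic above 512 kbps is dropped
  police 512000 conform-action transmit exceed-action drop
```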

"Am I correct in assuming that shaping is 'only' activated when the 'link' is congested ?"

No, shaping and policing limit the traffic regardless of what else is going across the interface. (Which is why I prefer setting queue bandwidths, which set bandwidth allocations only when there's congestion.)

"If none of the traffic is being marked for DSCP values, Is there any advantage enabling WRED ?"

Two answers:

DSCP is really a shortcut for downstream devices to make QoS decisions without further packet analysis. E.g. use NBAR once to mark the importance of traffic; downstream devices then only need to look at the DSCP tag. (If there's no downstream analysis, there's no advantage to the DSCP tag.)

WRED uses IP precedence (the same as the CS portion of DSCP) to adjust bandwidth allocations. Without it, all flows are treated alike, with proportioned bandwidth to each flow. This usually works well with many light-bandwidth flows and few heavy ones, but if there are many heavy flows and few light ones, the latter often suffer. If it's borderline between the number of concurrent light and heavy flows, tagging the light flows with a high IP precedence and leaving the heavy ones at the default of no IP precedence will sometimes suffice. Otherwise it's better to place heavy flows into their own controlled class or classes.
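A sketch of that tagging approach (class names, the precedence value, and the bandwidth percentage are hypothetical, and which combinations are supported varies by platform):

```
! hypothetical class for the light, interactive traffic
class-map match-all LIGHT
 match protocol http
!
! mark the light traffic on ingress
policy-map MARK-IN
 class LIGHT
  set ip precedence 3
!
! WRED within a class honors the precedence set above
policy-map WAN-OUT
 class BULK
  bandwidth percent 10
  random-detect
```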

PS:

If the heavy traffic is both FTP and SMTP, you can place both in the same class or in two classes. It usually makes little difference to bulk TCP traffic.

dongdongliu
Level 1

Of course you can use CBWFQ on your interface, but if the traffic is all "available" for work, etc., maybe you'd better think about gaining more bandwidth.

Dongdong suggests you might need more bandwidth. This is certainly possible. I didn't suggest it because often I wait until I see the results of QoS before doing so. (Or after doing an in-depth traffic analysis.)

However, there's often confusion about when additional bandwidth is necessary or whether QoS is the better option. Further, they are not mutually exclusive, and the best solution is often a combination of both. But how do you know?

For time sensitive applications, VoIP and interactive video being the "poster child", sufficient bandwidth to keep the packets from being delayed is what's really needed.

For non-time-sensitive, bandwidth-consuming applications, such as the "poster child" FTP, bandwidth isn't really required except with regard to meeting an objective for how long it takes to transfer the data. FTP will happily transfer across a 110 baud link just as it will across a 10 gig link; the latter just takes much, much less time. However, by design, TCP bulk transfers attempt to use all available bandwidth.

The issue we often face today is we mix, across the same path, time sensitive applications with bandwidth consuming applications, the latter again, often attempting to use all the bandwidth. While trying to do so, they often impact the time sensitive applications.

It's often expensive to provide enough bandwidth to keep the two types of traffic from conflicting especially on WAN circuits. QoS addresses this problem nicely if you can use it to insure the time sensitive traffic gets the bandwidth it needs to work well and the bandwidth consuming traffic only gets whatever is still available. (To keep the latter from breaking, it's best to guarantee it some minimal bandwidth.)

To summarize: use QoS to prioritize time-sensitive applications over bandwidth-consuming applications. Ensure the time-sensitive applications have sufficient bandwidth. Provide enough bandwidth for the bandwidth-consuming applications to meet your goals for transfer rates.

I agree with Joseph. When congestion occurs, the first consideration is QoS. Only afterward consider increasing bandwidth.

You can use NetFlow to analyze the applications in the traffic.
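A minimal NetFlow sketch for that (the interface name is assumed; older IOS releases use `ip route-cache flow` instead of `ip flow ingress`):

```
interface Serial0/0
 ip flow ingress
!
! then, from exec mode, view the top protocols and conversations:
! show ip cache flow
```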
