CBWFQ-output

Unanswered Question
Mar 15th, 2010

Hi,

We have a 256 kbps point-to-point link between two branches. Users at branch A access five applications hosted on different servers at branch B. I have applied QoS on the WAN interface, with five classes matching the traffic destined to the application server IPs.

But whenever my bandwidth utilization shoots up, I see dropped packets in the show policy-map interface output.

All applications are very slow from the users' perspective.

Please explain how to interpret the output, and whether there is any other way to allocate dedicated bandwidth to the applications during periods of high bandwidth utilization.
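For reference, a dedicated-bandwidth guarantee is exactly what CBWFQ's bandwidth command provides. A minimal sketch of such a policy follows; the class name, ACL name, interface, and bandwidth value are illustrative, not taken from the actual configuration:

```
class-map match-all APP1
 match access-group name APP1-ACL
!
policy-map WAN-OUT
 class APP1
  bandwidth 64          ! guarantees 64 kbps to APP1 during congestion
 class class-default
  fair-queue
!
interface Serial0/0
 bandwidth 256
 service-policy output WAN-OUT
```

Note that the bandwidth statement is a minimum guarantee that only takes effect under congestion; classes may use more when the link is idle.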

Aaron Harrison Mon, 03/15/2010 - 04:58

Hi

I presume you have applied some bandwidth allocation within a policy map and applied that policy map to the interface?

Also, have you created ACLs at either end that match correctly (i.e. at the server end, match the source IPs of the app servers; at the remote site, match the destination IPs of the app servers), and applied a service policy to the outbound interface at the server end as well as at the remote end?
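To illustrate the direction-sensitive matching described above (ACL names and the server address are hypothetical):

```
! Remote branch, outbound towards the servers:
! the app server is the DESTINATION
ip access-list extended 1qos
 permit ip any host 10.1.1.10
!
! Server end, outbound towards the branch:
! the same app server is now the SOURCE
ip access-list extended 1qos-return
 permit ip host 10.1.1.10 any
```

Each ACL is then referenced by a class-map in the service policy applied outbound on that end's WAN interface.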

If you have it all properly configured, then your problem is likely that you just don't have enough bandwidth. QoS can manage your bandwidth during congestion, but if you are still seeing a lot of drops in the classes you have defined then you don't have enough bandwidth for your requirements.

You may be able to apply some compression on the link, but how well this works depends on many factors:

http://www.cisco.com/en/US/docs/ios/qos/configuration/guide/hdr_comp_roadmap_ps6350_TSD_Products_Configuration_Guide_Chapter.html
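As one example from that roadmap, TCP header compression can be enabled per interface. This is a sketch only: it helps mainly with small-packet TCP flows, and both ends of the link must have it enabled:

```
interface Serial0/0
 ip tcp header-compression
```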

Regards

Aaron

Please rate helpful posts...

uthayaman Mon, 03/15/2010 - 05:12

Thanks for the reply.

Sorry, a small correction: I can't find any packet drops in the output, but my applications are still very slow. If you need any more configuration details, I will share them.

Here is my output:

show policy-map inter fas 0/0
FastEthernet0/0

  Service-policy output: policy1

    Class-map: 1class (match-all)
      652436 packets, 51900326 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: access-group name 1qos
      Queueing
        Output Queue: Conversation 265
        Bandwidth 23 (kbps) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 96/6482
        (depth/total drops/no-buffer drops) 0/0/0

    Class-map: 2class (match-all)
      179219 packets, 40189933 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: access-group name 2qos
      Queueing
        Output Queue: Conversation 266
        Bandwidth 15 (kbps) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 16/3742
        (depth/total drops/no-buffer drops) 0/0/0

    Class-map: 3class (match-all)
      43394 packets, 5668008 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: access-group name 3qos
      Queueing
        Output Queue: Conversation 267
        Bandwidth 32 (kbps) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 4/684
        (depth/total drops/no-buffer drops) 0/0/0

    Class-map: 4class (match-all)
      16544 packets, 2919785 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: access-group name 4qos
      Queueing
        Output Queue: Conversation 268
        Bandwidth 21 (kbps) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 2/108
        (depth/total drops/no-buffer drops) 0/0/0

    Class-map: 5class (match-all)
      1669927 packets, 144111253 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: access-group name 5qos
      Queueing
        Output Queue: Conversation 269
        Bandwidth 8 (kbps) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 52/8529
        (depth/total drops/no-buffer drops) 0/0/0

    Class-map: 6class (match-all)
      8885730 packets, 1821281003 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: access-group name 6qos
      Queueing
        Output Queue: Conversation 270
        Bandwidth 16 (kbps) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 578/157836
        (depth/total drops/no-buffer drops) 0/0/0

    Class-map: 7class (match-all)
      1487248 packets, 2148856748 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: access-group name 7qos
      Queueing
        Output Queue: Conversation 271
        Bandwidth 64 (kbps) Max Threshold 64 (packets)
        (pkts matched/bytes matched) 157/77903
        (depth/total drops/no-buffer drops) 0/0/0

Aaron Harrison Mon, 03/15/2010 - 05:16

Hi

What about the show policy-map interface output in the other direction (i.e. if that output was from remote > servers, then from servers > remote)?

Aaron

uthayaman Mon, 03/15/2010 - 05:23

I took this output from a branch where we have an MPLS VPN link to the aggregator location. At the aggregator location (where the servers are placed) we have enough bandwidth; utilization never goes beyond 50% of the link capacity, so I have not applied any QoS there.

I am facing the same issue at all the branches.

sean_evershed Mon, 03/15/2010 - 05:44

Have you tried enabling NetFlow to verify that your QoS definitions are honoured end to end?

Does your carrier have any queues defined within their network that you need to match on your router?
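If the carrier does offer a class-of-service model (commonly DSCP-based on MPLS VPN services), the branch policy can mark traffic so it lands in the right provider queue. A sketch, where the DSCP value is illustrative and must match whatever the carrier actually honours:

```
policy-map WAN-OUT
 class 1class
  bandwidth 23
  set ip dscp af31     ! mark so the carrier's queues carry it as intended
```

Without marking, the carrier may treat all of the branch traffic as best effort regardless of the local CBWFQ policy.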

uthayaman Mon, 03/15/2010 - 06:10

How do I analyze this with a NetFlow analyzer? Which tool is the best one to use? Let me analyze and update.

sean_evershed Mon, 03/15/2010 - 06:44

A guide to NetFlow can be found here:

http://www.cisco.com/en/US/prod/collateral/iosswrel/ps6537/ps6555/ps6601/prod_white_paper0900aecd80406232.html

Once configured, issuing the show ip cache verbose flow command will give you a wealth of information.
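Enabling classic NetFlow on the WAN interface is a short piece of configuration; the interface name and collector address below are illustrative, and the export lines are only needed if you feed an external analyzer:

```
interface FastEthernet0/0
 ip flow ingress
 ip flow egress
!
ip flow-export version 5
ip flow-export destination 192.168.1.50 9996   ! NetFlow collector
```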

It can also be used to find things like the top 10 talkers on the WAN link.

Commercial NetFlow tools can be expensive. A list can be found in the same link above. ManageEngine worked well for me.
