05-23-2007 08:53 AM - edited 03-03-2019 05:06 PM
I am trying to configure LLQ on a Cisco 837. The configuration is:
class-map match-all TEST-CLASS
match access-group 100
!
!
policy-map TEST-POLICY
class TEST-CLASS
priority percent 50
class class-default
fair-queue
!
interface ATM0
no ip address
no atm ilmi-keepalive
dsl operating-mode auto
!
interface ATM0.1 point-to-point
pvc 0/38
vbr-rt 256 256 1
tx-ring-limit 2
encapsulation aal5mux ppp dialer
dialer pool-member 1
service-policy output TEST-POLICY
!
access-list 100 permit icmp any any
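One way to check whether the priority class is actually catching traffic is to watch the per-class counters while pinging:

show policy-map interface ATM0.1

If LLQ is engaging, the TEST-CLASS section of the output should show a strict-priority queue and its packet/byte counters should increment with each ping; if they stay at zero, the ICMP traffic is not being matched into the priority queue.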
However, if I replace the TEST-POLICY policy-map with just:
policy-map TEST-POLICY
class class-default
fair-queue
I get the same effect. In both cases ping times are ~100ms. Without any service-policy the ping times are more variable (200-1500ms), so the queueing is having an effect, but I'm not sure whether the LLQ out of the router is actually working (or whether this is just WFQ operating rather than FIFO).
On an uncongested outbound path, ping times are ~30-40ms (inbound path is not congested).
05-23-2007 10:23 AM
Hi,
As far as I can tell, ping uses an echo request (upstream) and an echo reply (downstream). Your LLQ is of course applied outbound, so it affects only the upstream traffic. Accordingly, the excess delay you see even with LLQ may be because there is no LLQ configured for the downstream traffic on the router in front of yours.
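For the downstream direction, the upstream router would need an equivalent policy applied outbound on its interface facing your 837, along these lines (class, policy, and ACL names here are just placeholders):

class-map match-all ICMP-DOWN
match access-group 101
!
policy-map DOWN-POLICY
class ICMP-DOWN
priority percent 50
!
access-list 101 permit icmp any any

with "service-policy output DOWN-POLICY" on the interface facing your line. On a DSL service that router is usually under the ISP's control, so this may not be something you can configure yourself.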
HTH, please do rate all helpful replies,
Mohammed Mahmoud.
05-23-2007 03:36 PM
I would agree with you, and that was my first consideration, except that during the test the downstream bandwidth utilisation was very low (a few tens of kb/s), and QoS only comes into operation when the link is congested. What I am seeing is no real difference in response times between LLQ and WFQ (with no LLQ configured).
I ran the 837 IOS images through the Feature Navigator and it implied that priority queueing is only available in the IP PLUS feature set. However, with IP PLUS the same behaviour is observed.
05-23-2007 09:49 PM
As far as I could see on an 836, with QoS configured through SDM, packets generated by the router itself bypass NBAR and/or QoS processing.
I came to this conclusion after noticing in the SDM QoS monitor that these packets were not incrementing the counters.
I've also had some difficulty understanding QoS behaviour on the 836 with an IP PLUS IOS image. For example, in the SDM QoS monitor I can see the packet counters increasing, but the transmit rate counters are always zero even with the interface under congestion, and I have manually configured the correct up/down bandwidth values for the interface.
The "show policy-map interface" CLI command shows the same thing, i.e. the packets get counted but the rates are not calculated.
In my opinion there is also a drawback to using QoS on the 83x routers: they don't have much processing power, so under network congestion the CPU comes under heavier load from the added QoS processing tasks, causing some degradation in network performance. One therefore cannot expect very good ping times with QoS under congestion, even when giving the higher priority to ICMP.
Have you already tried monitoring the CPU usage of your 837 under network congestion?
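If your IOS version supports the "sorted" and "history" keywords, the CPU load can be watched with:

show processes cpu sorted
show processes cpu history

The first shows which processes are busiest over the 5sec/1min/5min intervals, and the second gives an ASCII graph of CPU usage over the last 60 seconds, 60 minutes, and 72 hours.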
Rui