I am trying to configure an LLQ on a Cisco 837. The configuration is:
class-map match-all TEST-CLASS
 match access-group 100
!
policy-map TEST-POLICY
 class TEST-CLASS
  priority percent 50
!
interface ATM0
 no ip address
 no atm ilmi-keepalive
 dsl operating-mode auto
!
interface ATM0.1 point-to-point
 vbr-rt 256 256 1
 encapsulation aal5mux ppp dialer
 dialer pool-member 1
 service-policy output TEST-POLICY
!
access-list 100 permit icmp any any
However, if I replace the TEST-POLICY policy-map with just:
I get the same effect. In both cases ping times are ~100ms. Without any service-policy, the ping times are more variable (200-1500ms), so the queueing is having an effect, but I'm not sure whether the LLQ out of the router is working (or whether this is just WFQ operating rather than FIFO).
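One way to check whether the priority queue is actually engaging (command names assume a standard 12.x IOS image; the interface name is taken from the configuration above) is:

```
! Per-class counters for the attached policy. Under the TEST-CLASS
! section, look for the "Priority" line and whether the offered rate
! and drop counters move while the link is congested.
show policy-map interface ATM0.1

! Queueing strategy and current queue occupancy on the interface.
show queueing interface ATM0.1
```

If the TEST-CLASS counters stay at zero while you ping through the congested link, the ICMP traffic is not being matched into the priority class at all, which would explain seeing only generic WFQ behaviour.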
On an uncongested outbound path, ping times are ~30-40ms (inbound path is not congested).
As far as I can tell, ping uses an echo request (upstream) and an echo reply (downstream). Your LLQ is of course applied outbound, so it only affects the upstream traffic; the excess delay you see when using LLQ may therefore be because there is no LLQ configured for the downstream traffic on the router in front of yours.
I would agree with you, and that was my first consideration, except that during the test the downstream bandwidth utilisation was very low (a few tens of kb/s), and QoS only comes into operation when the link is congested. What I am seeing is no real difference in response times between LLQ and plain WFQ (with no LLQ configured).
I ran the 837 IOS images through the Feature Navigator, and it implied that priority queueing is only available in IP PLUS. However, with IP PLUS the same behaviour is observed.
As far as I could see on an 836, with QoS configured through SDM, the packets generated by the router itself bypass the NBAR and/or QoS processing.
I came to this conclusion after noticing in the SDM QoS monitor that these packets were not incrementing the counters.
I've also had some difficulty understanding QoS functionality on the 836 with an IP PLUS IOS image. For example, in the SDM QoS monitor I can see the packet counters increasing, but the transmit-rate counters are always zero, even with the interface under congestion and the interface bandwidth values manually configured with the correct upstream/downstream rates.
The "show policy-map interface" CLI command shows the same thing: the packets are counted, but the bandwidth is not reported.
In my opinion there is also a drawback to using QoS on the 83x routers: they don't have much processing power, so under network congestion the CPU comes under heavier load from the added QoS processing, causing some degradation in network performance. So one cannot expect very good ping times with QoS under congestion, even when giving ICMP the highest priority.
Have you already tried monitoring the CPU usage of your 837 under network congestion?
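If not, a quick way to check (assuming a standard 12.x IOS image) is:

```
! Per-process CPU load, busiest first, with 5-second/1-minute/5-minute averages.
show processes cpu sorted

! ASCII graph of CPU utilisation over the last 60 seconds / 60 minutes / 72 hours,
! useful for catching spikes that coincide with your congestion tests.
show processes cpu history
```

If total CPU sits near 100% during the test, the variable ping times may be the router itself struggling rather than the queueing discipline.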