ip rtp priority and LLQ (dCBWFQ) for QoS not working on 7513/RSP16 routers
After receiving numerous complaints regarding voice quality from a T1 customer, we went on site to monitor their network and voice quality. What we found blew us away. After using the two Cisco-recommended QoS configurations, ip rtp priority and LLQ (CBWFQ), we found that neither provided even a reasonable amount of QoS during periods of heavy bulk data transfer. No doubt many other T1 lines are suffering the same problem - just not as badly as those with heavy bulk data transfers.
During periods of heavy bulk data transfer towards the customer network on T1 lines we are seeing significant voice/rtp packet loss in spite of having used 3 different methods recommended by Cisco for implementing QoS on our T1 lines. We are running RSP Software (RSP-PV-M), Version 12.3(22), RELEASE SOFTWARE (fc2) on our 7513 routers each equipped with RSP16 processors and VIP4-80 blades all with maximum memory configurations. There is no evidence that overall traffic on the blade (or router as a whole) has any effect on the voice traffic loss. We get the same results in the middle of the night on a given T1 as we would get in the middle of the day. We have of course confirmed that the voice traffic is not being lost prior to reaching the outbound T1 port. All T1 lines are point-to-point using Cisco HDLC or PPP transport protocol with Cisco routers at the customer prem.
We have not tried LFI or cRTP headers because according to Cisco these options will have a minimal positive effect and will increase router processing loads. I have to wonder, though, given our experience with the recommended Cisco QoS configurations, if LFI and cRTP wouldn't be more beneficial than Cisco is leading us to believe.
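If we do end up testing LFI and cRTP, the configuration would look roughly like the sketch below. This is not from our production config - the interface names, multilink group number, and fragment-delay value are placeholders, and LFI requires moving the T1 from plain PPP/HDLC onto Multilink PPP:

```
! Hypothetical LFI + cRTP sketch for a point-to-point T1 (MLP is required for interleaving)
interface Multilink1
 ip address 192.168.1.1 255.255.255.252
 ppp multilink
 ppp multilink fragment-delay 10   ! target ~10 ms max serialization delay per fragment
 ppp multilink interleave          ! let voice packets interleave between data fragments
 ip rtp header-compression         ! cRTP: compress 40-byte IP/UDP/RTP headers
!
interface Serial1/0/0:0
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
```

The customer-side router would need the matching MLP and cRTP configuration on its end of the link.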
Here are the methods that we have tried.

Method 1:
fair-queue   (this results in VIP-based fair queueing)
ip rtp priority 16384 16383 1000
According to Cisco's documentation this should assure timely delivery of all voice RTP packets within the specified port range. All bulk data will be secondary and delayed as long as necessary to make sure that the voice traffic gets through in a timely manner. The above example is supposed to allocate up to 1000kbps of bandwidth for RTP traffic in the port range from 16384 to 32767. We have verified that our RTP traffic is on ports within that range, that the DSCP is set correctly to EF (101110), and that we are not even coming close to exceeding the voice bandwidth allocation - just one voice stream at 90kbps will show substantial packet loss when there is data saturation.
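For context, here is roughly how Method 1 sits on the outbound T1. The interface name and description are placeholders, not our actual config:

```
interface Serial1/0/0:0
 description T1 to customer prem (placeholder)
 bandwidth 1536
 fair-queue                         ! with dCEF on the VIP4-80 this becomes VIP-based fair queueing
 ip rtp priority 16384 16383 1000   ! strict priority for UDP ports 16384-32767, up to 1000 kbps
```

The ip rtp priority syntax is starting-port, port-range, bandwidth - so 16384 16383 covers ports 16384 through 32767.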
When we added a rate-limit statement to the same interface, we discovered that it reduced the voice packet loss but didn't eliminate it. We then discovered that by reducing the burst levels specified in the rate-limit statement we could eliminate the voice packet loss altogether and maintain decent jitter (10-30ms). The problem is that even with a 1.4Mbps transfer rate in the rate limit, web surfing performance was very poor. Speed tests showed a very slow ramp up to full speed. We increased the burst limits and that helped the web surfing response times, but it immediately increased voice jitter to 80-120ms, and beyond about 6KB of burst we started seeing voice packet loss again. If we removed fair-queue on the interface (fifo mode), voice packet loss just got even worse. Yes, I have confirmed that when fair-queue is enabled we are in VIP-based fair queue mode. I have also confirmed that CEF is working, which Cisco says is a requirement for optimum performance.
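The rate-limit experiment looked roughly like this. The burst values shown are illustrative - we tuned them per circuit, and the interface name is a placeholder:

```
interface Serial1/0/0:0
 fair-queue
 ip rtp priority 16384 16383 1000
 ! CAR: rate in bps, then normal and max burst in bytes.
 ! Small bursts (a few KB) protected voice; larger bursts restored
 ! web responsiveness but brought jitter and voice loss back.
 rate-limit output 1400000 4000 4000 conform-action transmit exceed-action drop
```

The trade-off we hit is inherent to this approach: the burst size that keeps the T1's transmit queue shallow enough to protect voice is also small enough to throttle TCP's ramp-up.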
The failure to prevent high jitter and/or loss of voice traffic is very frustrating, because according to everything that I have read, with an ip rtp priority configuration in place voice traffic should always be put at the head of the queue and go through ahead of all other traffic.
Method 2: (Note: 192.168.x.x substituted for real IP addresses in configurations below)
Low Latency Queuing (LLQ - CBWFQ)
Using Rate limiting policies
Using Traffic Shaping policies
We found that rate limiting alone was causing very poor performance for bulk data when used with an LLQ configuration. Using traffic shaping has helped, but this seems like a fairly exotic solution to what should be working with the simpler configurations above. We also found that bulk data performance is suboptimal during periods without any voice traffic. We get about a 10% reduction in overall available bulk data bandwidth when there is no voice traffic, and the 'interactive responsiveness' of web surfing is notably more sluggish.
We are of course using a Cisco-recommended LLQ configuration on the customer's router, and that appears to be working very well. During even the most saturated tests, not a single packet of voice data originating from the customer's network is lost, and jitter is excellent - typically <30ms and predominantly around 15ms.
According to Cisco's documentation, we should be able to implement an LLQ configuration for just the priority traffic leaving out any service policies that address any other traffic type. All non-priority traffic is supposed to be treated as bulk data and throttled accordingly but that simply is not the case. High voice jitter and packet loss are happening the majority of the time.
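What the documentation suggests should suffice is a priority-only policy, with everything else falling into class-default. A minimal sketch of that configuration - the policy-map name and the interface are placeholders, not from our config:

```
class-map match-any Voice-Network
 match dscp ef
!
policy-map Voice-Only-LLQ
 class Voice-Network
  priority 1000          ! strict-priority queue, policed to 1000 kbps under congestion
 class class-default
  fair-queue             ! all non-priority traffic shares the remaining bandwidth
!
interface Serial1/0/0:0
 service-policy output Voice-Only-LLQ
```

This is the configuration that, per the documentation, should put EF-marked voice ahead of all bulk data on its own - yet in our testing it does not.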
We have verified that our VIP blades are essentially idle as is the case with our RSP.
Shaping (parent) class:
 description Shaping for T1 1.5M subinterfaces with 80% LLQ QoS for Hosted customers
 shape average 1400000 5600 5600

Priority (child) class:
 description LLQ for Network Voice Services with 80% allocation
 priority percent 80

class-map match-any Voice-Network
 match access-group name Voice-Network
 match dscp ef

ip access-list extended Voice-Network
 permit udp any 192.168.18.0 0.0.0.255 range 16384 32767
 permit udp 192.168.18.0 0.0.0.255 any range 16384 32767
 permit udp any 192.168.18.0 0.0.0.255 eq 5060
 permit udp 192.168.18.0 0.0.0.255 any dscp ef
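For completeness, the fragments above fit together as a hierarchical shaping policy along these lines. The policy-map names here are placeholders I've added; only the class-map, ACL, and parameter lines are from our actual config:

```
class-map match-any Voice-Network
 match access-group name Voice-Network
 match dscp ef
!
policy-map LLQ-Child
 description LLQ for Network Voice Services with 80% allocation
 class Voice-Network
  priority percent 80
!
policy-map Shape-Parent
 description Shaping for T1 1.5M subinterfaces with 80% LLQ QoS for Hosted customers
 class class-default
  shape average 1400000 5600 5600     ! shape below line rate so queueing happens in the router
  service-policy LLQ-Child            ! LLQ applies within the shaped rate
!
ip access-list extended Voice-Network
 permit udp any 192.168.18.0 0.0.0.255 range 16384 32767
 permit udp 192.168.18.0 0.0.0.255 any range 16384 32767
 permit udp any 192.168.18.0 0.0.0.255 eq 5060
 permit udp 192.168.18.0 0.0.0.255 any dscp ef
```

Shaping to 1.4Mbps (below the T1's 1.536Mbps) is what finally pulled the congestion point back inside the router where LLQ can act on it, at the cost of the ~10% bulk bandwidth loss described above.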
With the amount of router horsepower that we have in the 7513 we are shocked at the very poor QoS performance. Even a 1721 router does much better at QoS.
Does anyone know why we are experiencing these QoS issues? Has anyone else had similar problems?
Any ideas for a solution other than the heavy-handed LLQ with traffic shaping?