I have configured QoS with dedicated bandwidth for the calls from the branch to the main location over a 64 kbps link.
But during peak SAP traffic, the voice quality is severely affected.
The branch config is given below.
class-map match-all voice-sig
 match access-group 102
class-map match-all voice
 match ip precedence 5
 match access-group 103
!
 ip address 10.170.0.6 255.255.255.252
 ip tcp header-compression iphc-format
 service-policy output WAN
 ip rtp header-compression iphc-format
!
! ACL 102: SCCP (TCP 2000-2002) and H.225/H.323 (TCP 1720) signalling
access-list 102 permit tcp any any range 2000 2002
access-list 102 permit tcp any eq 1720 any
access-list 102 permit tcp any any eq 1720
! ACL 103: RTP media ports
access-list 103 permit udp any any range 16384 32767
I tried even with multilink.
Any input on why this is happening would be appreciated.
Thanks in advance,
Over a 64 kbps link it is absolutely necessary to configure fragmentation because of serialization delay: a 1500-byte data packet takes almost 200 ms to go out (1500 x 8 / 64000 = 187.5 ms). If voice packets are stuck waiting behind it, your jitter will be 10 times worse than the recommended 20 ms for best-quality voice.
Go back to multilink and set up "ppp multilink fragment delay 20" and "ppp multilink interleave".
Here is a good doc:
You should set up your class maps to match on IP precedence only. Access lists do not work well (or at all) when used in an output service policy. Your IP phones or dial peers will be setting IP precedence (or DSCP) for voice and control. To see if your service policy is actually working, use "show policy-map interface multilink X".
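For example, assuming the bundle is multilink 1, the check from the exec prompt is just:

```
router# show policy-map interface multilink 1
```

In the output, look at the packet/byte counters under each class: if the counters for the voice class stay at zero while a call is up, the classification is not matching and the priority queue is never used.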
Have Fun, Dave
Thanks a lot for your valuable inputs.
In my config I had set an MTU of 300 to reduce serialization delay.
In the given scenario, voice quality is really good during normal traffic, but it degrades during peak SAP traffic. Even though bandwidth is reserved through "priority 24", it is not effectively reserved during the peak hours. The catch may be in the access lists, and I need to try with only "match ip precedence".
Any further input on this will be very helpful.
I agree with Alan; changing the MTU is normally not a good way to achieve fragmentation for voice. What IOS are you running? If it is a newer IOS you can use both LLQ and FRF.12 fragmentation.
Assuming you are using G.729 voice compression (you should ensure you are), you have enough bandwidth for one call. Cisco says to plan on 20k per call, which is a little more than you need. How many concurrent calls are you making? If more than one, there is a problem.
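For reference, the per-call figure can be sanity-checked from the G.729 packet sizes, assuming the default 20 ms packetization (50 packets per second):

```
G.729 payload:          20 bytes/packet
IP + UDP + RTP headers: 40 bytes/packet
Total: 60 bytes x 50 pps x 8 bits = 24 kbps per call (before layer-2 overhead)
With cRTP (40-byte header compressed to ~2-4 bytes): roughly 9-10 kbps per call
```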
Are your voice packets truly marked with IP precedence 5? Cisco's IP phones do that automatically, but the routers do not. So if you are running VoIP with CallManager, things should be okay. If you are running VoIP initiated at a router, you will need to add an "ip precedence 5" statement in your VoIP dial peer.
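For a router-originated call, that marking would look something like the sketch below; the dial-peer tag, destination-pattern and session target are placeholders, and only the "ip precedence 5" line is the point:

```
dial-peer voice 100 voip
 destination-pattern 4...
 session target ipv4:10.170.0.5
 codec g729r8
 ip precedence 5
```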
I also question the reservation of 8K for call setup. It does not take 8K of bandwidth to set up one call; it only takes a couple of packets. This bandwidth is not reserved unless you need it, but I would remove that class altogether from my LLQ configuration.
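Putting the two suggestions together (precedence-only matching, no separate signalling class), the policy could be reduced to something like this; the original policy-map WAN was not posted, so the names here are assumptions and only the "priority 24" value comes from the thread:

```
class-map match-all voice
 match ip precedence 5
!
policy-map WAN
 class voice
  priority 24
 class class-default
  fair-queue
```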
I have already tried multilink with interleaving and header compression. The voice and normal data packets then went through fine, but the remote SAP users were not able to connect to the central site. I tried even with only multilink (removing all the other configs), with the same result, so I was forced to remove the multilink. This problem is not listed anywhere as a bug either, and the customer doesn't want an IOS upgrade.
That's why I opted for the physical-link config with header compression and the MTU size change.
Regarding the setup, it's a pure IP telephony setup.
Any input on this will be highly appreciated..
Thanks & Regards,
Seen this issue in the past where the larger packets don't get through. You can confirm it by doing an extended ping and increasing the packet size by 100 bytes at a time - I recall it will fail after around 1200 bytes. It is caused by the fragment delay being set too low - try changing it from 10 ms to 20 ms. At 64K the 10 ms doesn't quite seem to work. Your config should look something like this -
interface multilink 1
 ip address 10.1.1.1 255.255.255.252
 ppp multilink fragment-delay 20
 ppp multilink interleave
 ip rtp header-compression
 service-policy output blah-blah-blah
!
interface serial 0/1
 no ip address
 encapsulation ppp
 ppp multilink
 multilink-group 1
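The size test above can also be done non-interactively from the exec prompt (the address is a placeholder for the far end of the bundle):

```
router# ping 10.1.1.2 size 1000 repeat 5
router# ping 10.1.1.2 size 1100 repeat 5
router# ping 10.1.1.2 size 1200 repeat 5
```

If the smaller pings succeed and the larger ones time out, fragments are being dropped somewhere on the bundle.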
I tried to bundle two 2 Mbps leased lines using multilink (to the same location) without even using interleaving.
Even then, the SAP traffic was not going through (users were not able to connect), so I don't think it is a fragment-delay problem. This was done on the same router to which the 64K links are connected.