ntk
New Member

QOS Bandwidth Reserve problem

Hi,

I have configured QoS with dedicated bandwidth for calls from the branch to the main location over a 64 Kbps link.

But during peaks of SAP traffic, the voice quality is severely affected.

The branch config is given below.

Config:

class-map match-all voice-sig
 match access-group 102
class-map match-all voice
 match ip precedence 5
 match access-group 103
!
policy-map WAN
 class voice
  priority 24
 class voice-sig
  bandwidth 8
 class class-default
  fair-queue
!
interface Serial0
 mtu 300
 ip address 10.170.0.6 255.255.255.252
 encapsulation ppp
 ip tcp header-compression iphc-format
 service-policy output WAN
 ip rtp header-compression iphc-format
!
access-list 102 permit tcp any any range 2000 2002
access-list 102 permit tcp any eq 1720 any
access-list 102 permit tcp any any eq 1720
access-list 103 permit udp any any range 16384 32767

I even tried with multilink.

Any input on why this is happening would be much appreciated.

Thanks in Advance,

Naveen

8 REPLIES
Blue

Re: QOS Bandwidth Reserve problem

Over a 64 kbps link it is absolutely necessary to configure fragmentation because of serialization delay: it takes almost 200 ms to clock a 1500-byte data packet out of the interface. If voice packets are stuck waiting behind one, your jitter will be ten times worse than the recommended 20 ms for best-quality voice.
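
Rough numbers, assuming a 1500-byte MTU: 1500 bytes x 8 = 12,000 bits, and 12,000 bits / 64,000 bps = 187.5 ms per packet. To cap the wait at 20 ms you want fragments of about 64,000 x 0.020 / 8 = 160 bytes, which is what a 20 ms fragment delay works out to at this link speed.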

Go back to multilink and set up "ppp multilink fragment delay 20" and "ppp multilink interleave".

Here is a good doc:

http://www.cisco.com/en/US/partner/products/sw/iosswrel/ps1839/products_feature_guide09186a0080087d30.html#1039675

You should set up your class maps to match on IP precedence only; access lists do not work well (or at all) when used in an output service policy. Your IP phones or dial peers will be setting IP precedence (or DSCP) for voice and control. To see whether your service policy is actually working, use "show policy-map interface multilink X".
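
For example (a sketch only - this assumes your phones mark voice media with precedence 5 and call signaling with precedence 3, which is the usual Cisco IP phone behavior; verify with "show policy-map interface"):

class-map match-all voice
 match ip precedence 5
class-map match-all voice-sig
 match ip precedence 3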

Have Fun, Dave

ntk
New Member

Re: QOS Bandwidth Reserve problem

Hi Dave,

Thanks a lot for your valuable inputs.

In my config I set the MTU to 300 to reduce the serialization delay.

In this scenario, voice quality is really good under normal traffic, but it degrades during peak SAP traffic. Even though bandwidth is reserved through "priority 24", the reservation does not seem to hold during peak hours. The catch may be in the access lists, so I need to try matching only on IP precedence.

Any further input on this will be very helpful,

Thanks

Naveen

New Member

Re: QOS Bandwidth Reserve problem

Changing the MTU size on an interface is not recommended.

Try fragmentation instead.

New Member

Re: QOS Bandwidth Reserve problem

Hello Naveen,

I agree with Alan; changing the MTU is normally not a good idea as a way to achieve fragmentation for voice. What IOS are you running? If it is a newer IOS you can combine LLQ with link fragmentation and interleaving (FRF.12 on Frame Relay; multilink PPP fragmentation on a PPP link like yours).

Assuming you are using G.729 voice compression (you should ensure you are), you have enough bandwidth for one call. Cisco says to plan on 20k per call, which is a little more than you actually need. How many concurrent calls are you making? If more than one, there is a problem.
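
Rough per-call math for G.729: a 20-byte payload every 20 ms is 50 packets per second. With 40 bytes of IP/UDP/RTP headers plus about 6 bytes of PPP overhead that is 66 bytes x 8 x 50 = 26.4 kbps per call, and with RTP header compression the 40-byte header shrinks to 2-4 bytes, bringing it down to roughly 11-12 kbps - which is why your 24 kbps priority queue is enough for one compressed call.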

Are your voice packets truly marked with IP precedence 5? Cisco's IP phones do that automatically, but the routers do not, so if you are running VoIP with CallManager things should be okay. If the VoIP calls are initiated at a router, you will need to add an "ip precedence 5" statement to your VoIP dial peer.
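
Something along these lines - the dial-peer tag, destination pattern, and session target here are placeholders, not taken from your config:

dial-peer voice 100 voip
 destination-pattern 9T
 session target ipv4:10.170.0.5
 ip precedence 5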

I also question the reservation of 8K for call setup. It does not take 8K of bandwidth to set up one call; it only takes a couple of packets. That bandwidth is not consumed unless it is needed, but I would still remove the class from the LLQ configuration altogether.
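
That would leave the policy looking something like this (a sketch based on your posted config):

policy-map WAN
 class voice
  priority 24
 class class-default
  fair-queue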

Regards,

Bob

ntk
New Member

Re: QOS Bandwidth Reserve problem

Hi Bob,

I have already tried multilink with interleaving and header compression. Voice and normal data packets went through fine, but the remote SAP users were not able to connect to the central site. I tried with multilink alone (removing all the other configs) and got the same result, so I was forced to remove multilink. This problem is not listed anywhere as a bug either, and the customer does not want an IOS upgrade.

That is why I opted for the physical-link config with header compression and the MTU size change.

Regarding the setup, it is a pure IP telephony setup.

Any input on this will be highly appreciated..

Thanks & Regards,

Naveen

Silver

Re: QOS Bandwidth Reserve problem

I have seen this issue in the past where the larger packets don't get through. You can confirm it by doing an extended ping and increasing the packet size by 100 bytes at a time - I recall it will fail after around 1200 bytes. It is caused by the fragment delay being set too low - try changing it from 10 msec to 20 msec. At 64K, 10 msec doesn't quite seem to work. Your config should look something like this -

!
interface multilink 1
 ip address 10.1.1.1 255.255.255.252
 bandwidth 64
 fair-queue
 ppp multilink
 ppp multilink fragment-delay 20
 ppp multilink interleave
 multilink-group 1
 ip rtp header-compression
 service-policy output blah-blah-blah
!
interface serial 0/1
 no ip address
 bandwidth 64
 encapsulation ppp
 ppp multilink
 multilink-group 1
!
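
To run the packet-size test mentioned above, an extended ping along these lines will do (the target address is only an example, and on older IOS you may have to use the interactive ping dialog to set the datagram size):

ping 10.170.0.5 size 1200 repeat 10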

New Member

Re: QOS Bandwidth Reserve problem

A 10 msec chunk of a packet on a 64 kbps link is a very small chunk, so the increase to 20 msec is probably a good idea.
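
For scale: at 64 kbps a 10 msec fragment is 64,000 x 0.010 / 8 = 80 bytes, and 20 msec doubles that to 160 bytes - still small enough that a voice packet never waits more than about 20 msec behind a data fragment.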

Regards,

Bob

ntk
New Member

Re: QOS Bandwidth Reserve problem

I tried bundling two 2 Mbps leased lines using multilink (to the same location), without even using interleaving.

Even then the SAP traffic was not going through (users were not able to connect), so I don't think it is a fragment-delay problem. This was done on the same router the 64K links are connected to.

Any inputs?

Regards,

Naveen
