09-12-2008 07:49 AM
Since implementing QoS on a T1 link, it seems that the utilization has dropped ... a lot, and I need a quick review to make sure I didn't fat-finger something or do something dumb (it happens).
The interface is a full T1 (private point-to-point), which should be 1.544 Mbps = 1544 Kbps, so for the interface I have specified:
!
interface Serial0/1/0:0
bandwidth 1544
max-reserved-bandwidth 100
ip address 192.168.16.1 255.255.255.252
ip directed-broadcast
ip nbar protocol-discovery
encapsulation ppp
ip tcp header-compression iphc-format
service-policy output PtP-Edge
ip rtp header-compression iphc-format
!
!
A 'sh int' results in:
MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,
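For reference, the full output also includes the 'Queueing strategy:' line, which should read 'Class-based queueing' while the policy is attached; it can be filtered down with:
show interfaces Serial0/1/0:0 | include Queueing|load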
My class maps and policy map are:
class-map match-any Database
match access-group name Database_List
match dscp af21
class-map match-any Voice
match protocol rtp audio
match access-group name VoIP_List
match dscp ef
match access-group name Video_Conference
class-map match-any Signaling
match dscp cs3
match protocol rtp
match dscp af31
match access-group name VoIP_Control
!
!
policy-map PtP-Edge
class Voice
priority percent 33
class Database
bandwidth percent 37
random-detect
class Signaling
bandwidth percent 5
class class-default
bandwidth percent 25
random-detect
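As a sanity check on the percentages: 33 + 37 + 5 + 25 = 100, which is why max-reserved-bandwidth is set to 100. Against the bandwidth 1544 statement, that works out to roughly 510 Kbps for Voice, 571 Kbps for Database, 77 Kbps for Signaling, and 386 Kbps for class-default.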
Did I miss something major in here? My tx/rx load was usually pegged before, and now is only:
reliability 255/255, txload 2/255, rxload 6/255
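In case the per-class counters help, they are visible with (interface name as above):
show policy-map interface Serial0/1/0:0
show ip nbar protocol-discovery interface Serial0/1/0:0
The first should show the offered rate and any drops per class; the second shows what NBAR protocol discovery is seeing on the link.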
09-12-2008 11:40 AM
Why max-reserved-bandwidth 100?
Were you also using the header compression options before application of the policy?
09-12-2008 11:46 AM
max-reserved-bandwidth 100 is set so that I can explicitly define and control the 'class-default' settings. I tried to follow the QoS SRND pretty closely in regard to the setup.
The IP header compression was applied before the policy was implemented ... I added the RTP header compression afterwards, since it was previously unavailable on the old IOS version the routers were running before being upgraded. It is a private link, and we control the routers on each side.
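For what it's worth, the compression counters on each side can be checked with:
show ip tcp header-compression Serial0/1/0:0
show ip rtp header-compression Serial0/1/0:0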
09-12-2008 04:08 PM
Can you provide an explicit reference where setting max-reserved-bandwidth is recommended (not just possible)? The reason I ask is that about all the Cisco references I recall having seen advise caution, such as: "If you want to override the fixed amount of bandwidth, exercise caution and ensure that you allow enough remaining bandwidth to support best-effort and control traffic, and Layer 2 overhead."
I wouldn't think max-reserved-bandwidth is the problem, although I'm a bit concerned about setting it to 100%. You might try setting it back to the default (75%) and seeing whether the behavior changes.
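If you test that, note that your attached policy currently reserves 100% of the bandwidth, so (if I recall the IOS behavior correctly) you would need to detach the policy or reduce the class percentages before restoring the default:
interface Serial0/1/0:0
 no max-reserved-bandwidth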
With regard to the two header compression options, I have a hazy recollection that they might not be recommended at full T-1 bandwidth or better. Here too, I would be surprised if they were the cause of your issue.
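If you want to rule the compression out, it can be removed on both routers with:
interface Serial0/1/0:0
 no ip rtp header-compression
 no ip tcp header-compression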
Other than those two concerns, both of which are unusual but neither of which I know to be causing your issue, I don't see anything in your policy or policy stats that should cause such a reduction in performance.
You might try a policy like this:
policy-map PtP-Edge
class Voice
priority percent 33
class class-default
fair-queue
If you really feel the need to treat Signaling or Database specially, increase their IP precedence values (I believe FQ within CBWFQ class-default is WFQ, which weights flows by precedence). Or, for Signaling, add it to the LLQ class, as sketched below.
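Since your Voice class is match-any, one way to fold Signaling into the LLQ class (a sketch, reusing your existing class and ACL names) is simply to add its criteria:
class-map match-any Voice
 match dscp cs3
 match access-group name VoIP_Control
Re-entering an existing match-any class map and adding match statements appends them, so CS3-marked signaling traffic would then be serviced by the priority queue.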
PS:
Was the interface using WFQ or FIFO before you applied the policy?