08-29-2008 11:15 AM - edited 03-03-2019 11:20 PM
We recently installed a new clear-channel DS3 with an NM-1T3 card, and everything seems to be okay, but I have noticed that some of our ping response times have increased. I have been looking at the dsu bandwidth on some of the subinterfaces, and it is showing the full 45-meg DS3. I do have traffic-shape commands on each subinterface so that the actual bandwidth to each site is 768000. My question is: do I need to put dsu bandwidth commands in each subinterface so that the correct bandwidth shows for each subinterface when I do a sho int command?
Example of the sho int command
Serial1/0.102 is up, line protocol is up
Hardware is DSXPNM Serial
Description: connected to xxxxxxxx
Interface is unnumbered. Using address of FastEthernet0/1 (192.168.xx.xxx)
MTU 4470 bytes, BW 44210 Kbit, DLY 200 usec,
reliability 255/255, txload 20/255, rxload 6/255
Encapsulation FRAME-RELAY IETF
Example of interface itself:
interface Serial1/0.102 point-to-point
description connected to xxxxx
ip unnumbered FastEthernet0/1
traffic-shape rate 768000 19200 19200 1000
08-29-2008 11:30 AM
Hello Phil,
if you are using modular QoS, use the bandwidth command under each subinterface to reflect the CIR of 768 kbps:
bandwidth 768
The default BW value is equal to the physical bandwidth on each subinterface, so percentage-based commands could be pointing at the wrong value.
dsu bandwidth is a layer 1 parameter that you set only once, on the physical interface.
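As a sketch, using the interface names from your example above (the dsu bandwidth value is taken from the BW shown in your sho int output):

```
! physical interface: dsu bandwidth is a layer 1 setting, configured once
interface Serial1/0
 dsu bandwidth 44210
!
! each subinterface: bandwidth (in kbps) reflects the PVC CIR
interface Serial1/0.102 point-to-point
 bandwidth 768
```

The bandwidth command is informational only; it does not shape or police traffic, but it is what percentage-based QoS and routing metrics read.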
Hope to help
Giuseppe
08-29-2008 11:33 AM
no QOS being used. Strictly data, no voice riding on DS3.
The bandwidth to each site is a full T1 with a CIR of 768.
08-29-2008 11:47 AM
Hello Phil,
ok, so there are no negative effects from the default BW setting.
What you see should be the effect of the shaping process, which buffers frames exceeding the shaped rate, so an increase in delay can be expected.
Try to monitor what the shaper is doing.
Hope to help
Giuseppe
08-29-2008 11:50 AM
I will do that. I have also noticed a difference in the MTU size and DLY usec between us and the site. The site MTU size is 1500 and my MTU size is 4470. The DLY at the site is 20000 and here it is 200 usec. Could this cause a significant increase in the response time? Thanks for your help
08-29-2008 11:58 AM
Hello Phil,
an MTU mismatch at the two ends of the link can have negative effects on several things, including TCP throughput, and some routing protocols like OSPF can get stuck.
I would suggest you use MTU 1500 on the physical interface.
DS3 interfaces have a default MTU of 4470, but you can change it.
Notice that bandwidth and delay are both configurable parameters on interfaces.
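For example, a sketch using the physical interface from your earlier output (subinterfaces inherit the physical interface MTU):

```
interface Serial1/0
 mtu 1500
```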
Hope to help
Giuseppe
09-01-2008 03:51 AM
If the far (remote) side's physical link is T-1, you might try increasing your shaper to permit 1.5 Mbps. (Depending where the frame circuits are, often you won't see much of an issue going beyond CIR.)
What does the far (remote) side do with regard to QoS?
You might be able to monitor your shapers using "show traffic-shape queue". See if they are queuing traffic when you notice ping times increase.
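For instance (the subinterface name here is taken from your earlier example):

```
show traffic-shape queue Serial1/0.102
show traffic-shape statistics
```

A growing queue depth or a rising delayed-packets counter when ping times climb would confirm the shaper is the source of the added latency.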
Do the far (remote) sides terminate more than one PVC per site?
You might also need to tune your other traffic-shape values. When ping times increase, do they stay high or do they vary between high and low?
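If you do raise the shaped rate to a full T1, a sketch might look like the following (Bc/Be are scaled to keep the same 25 ms interval as your current 768 kbps shaper; the exact values are illustrative, not prescriptive):

```
interface Serial1/0.102 point-to-point
 traffic-shape rate 1536000 38400 38400 1000
```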
You shouldn't need to place dsu bandwidth statements within each subinterface, although an ordinary bandwidth statement can be useful for other higher protocols.
Is the DS3 oversubscribed? I.e., does the sum of the PVC bandwidths exceed its total bandwidth? If it can, it's possible the congestion is on the physical interface and not on the PVCs.