My network was at the margin before, but I think more and more of my traffic is above the configured PVC bandwidth and is being flagged as discard-eligible. Later today I'll be in touch with AT&T for some testing, so I don't have hard numbers yet.
Question: Can I use the rate-limit command to ensure my router never sends more than 192Kbit/sec worth of traffic onto my ckt? My port size is 384K and the PVC is 192K. This sort of restriction won't have any ill effects; I just don't want any packets being flagged as discard-eligible in the frame-relay cloud.
Can I omit the burst-rate numbers and just tell the interface to never send traffic faster than 192Kbit/sec?
I'm at a customer's site today, and need to know this quickly. I can always come back though :-)
I don't recommend rate-limit; I'd use frame-relay traffic shaping instead.
Quick configuration for you:
1) configure a map-class
map-class frame-relay FRTS
frame-relay cir 192000
frame-relay bc 24000
2) enable shaping on the physical interface and apply the class
frame-relay traffic-shaping
frame-relay class FRTS
3) Verify shaping is configured correctly with the show frame pvc [dlci] command:
sh frame pvc 103
PVC Statistics for interface Serial0/0 (Frame Relay DTE)
DLCI = 103, DLCI USAGE = UNUSED, PVC STATUS = ACTIVE, INTERFACE = Serial0/0
input pkts 0 output pkts 0 in bytes 0
out bytes 0 dropped pkts 0 in pkts dropped 0
out pkts dropped 0 out bytes dropped 0
in FECN pkts 0 in BECN pkts 0 out FECN pkts 0
out BECN pkts 0 in DE pkts 0 out DE pkts 0
out bcast pkts 0 out bcast bytes 0
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
pvc create time 00:02:12, last time pvc status changed 00:02:02
cir 192000 bc 24000 be 0 byte limit 3000 interval 125
mincir 96000 byte increment 3000 Adaptive Shaping none
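As a sanity check, the "interval 125" and "byte increment 3000" in that output follow directly from the map-class values, using the standard Tc = Bc / CIR relationship (a quick sketch in Python, just to show the arithmetic):

```python
# How IOS derives the shaping interval (Tc) and the per-interval byte
# allowance from the map-class values above, assuming Tc = Bc / CIR.
cir = 192000      # committed information rate, bits/sec
bc = 24000        # committed burst, bits per interval

tc_ms = int(bc / cir * 1000)    # shaping interval in milliseconds
byte_increment = bc // 8        # bits credited per interval, in bytes

print(tc_ms)           # 125, matching "interval 125" above
print(byte_increment)  # 3000, matching "byte increment 3000" above
```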
I agree with Edison; I much prefer shaping.
If your frame-relay PVCs are defined via subinterfaces, you can also use generic traffic shaping. See http://www.cisco.com/en/US/products/sw/iosswrel/ps1835/products_configuration_guide_chapter09186a00800bd8ef.html
GTS can also adjust its speed when it sees FECNs/BECNs. (I believe that's also possible with Edison's approach.)
If you're doing frame-relay in the US or Canada, providers often don't drop DE traffic.
I did extensive testing in the presence of a.) full load (384Kbit/sec) and b.) just my misbehaving application running. It was suggested that I use frame-relay traffic shaping rather than rate-limiting. My results proved interesting.
ATT indicated the ingress frame-relay switches will tag the packets above the PVC rate (192K) as DE. However, domestically, ATT will allow you to operate at port-speed all day and not discard packets, due to the amount of free bandwidth available. However, it's a different story at the NNI (Network-to-Network Interface). My ckt takes me from Baltimore MD (ATT) to Anchorage AK (NNI), then into "AlasCom" territory, which is an ATT subsidiary, then into Fairbanks AK, which is the termination of the ckt. The NNI is configured to drop DE packets in the presence of network congestion.
Doing a full-blast test for ~2 minutes yielded 60 dropped packets, and I was operating at 180% of CIR (CIR=192K). I then put frame-relay traffic-shaping into place with a rate of 340000 (340Kbit/sec). ATT still saw some DE packets, but none were dropped.
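For what it's worth, a rough back-of-the-envelope on those numbers (my own simplifying assumption: the switch marks roughly everything above CIR as DE, so the DE fraction is about (rate - CIR) / rate):

```python
# Rough estimate of the DE-marked fraction when sending above CIR.
# Assumption (mine, not from ATT): the switch marks roughly everything
# above CIR in each interval as DE.
cir = 192000
rate = 1.8 * cir            # "operating at 180% of CIR"

de_fraction = (rate - cir) / rate
print(round(de_fraction, 2))   # ~0.44: nearly half the frames DE-marked
```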
Given that time was running out, I decided to lift my traffic-shaping and let my misbehaving application use the frame-relay link, hoping a failure would occur. It did. ATT saw no dropped DE packets at all, but what I was seeing on my routers was more interesting. Remember my endpoints are Fairbanks AK and Baltimore MD.
On the Fairbanks router, I issued "show frame-relay pvc" (I only have one PVC). The "out pkts dropped" counter was incrementing by about 100 every ~2-4 seconds! (I was not using traffic shaping, so that couldn't account for my outbound dropped packets.) The "out bytes dropped" was around 300,000 (300KBytes) -- significant. At the same time, on the Baltimore router, the "in DE pkts" statistic was also counting up at about the same rate.
Now, as I understand it, the Baltimore router was seeing inbound DE-flagged packets because those packets were above 192Kbit (CIR) but weren't dropped. What's more concerning is how / why my router is dropping so many outbound packets even with traffic-shaping turned off. Lastly, when I do a "show int s0/2" (my physical frame-relay interface), the output drops shown on the input-queue line also count up at about 100 every 5 seconds or so.
The Baltimore router is a 3825 with 256MB memory and a regular T1 WIC card. The Alaska router is an old 2610 with 48MB memory and a T1 WIC card. Nothing's changed hardware-wise, so I suspect a slight increase in traffic has 'put me over the edge'. What do you think?
Ok, so you have a hub-and-spoke design. Please tell us the bandwidth allocated at each location.
You need to implement traffic-shaping in all locations for optimal result. By default, it's assumed the interface can transmit at 1.5Mbps, so implementing traffic-shaping in all participating routers will regulate the transmission speed.
Both my Baltimore and Fairbanks AK ends are 384K ports with 192K PVCs. They arrive on a T1 transport with the DS0s at 64Kbit each. The CSU/DSU is configured to pay attention to only 6 channels on the T1 (thus 6 x 64 = 384).
Question: Even in the presence of proper traffic shaping, if my bandwidth requirements surpass what can be handled, will the net result be the routers dropping tons of traffic, correct? Or can the routers slow down the rate at which ACKs are sent back to the PC, making the TCP stack at layer 4 slow down the rate at which it puts packets on the wire? We'd have to see if this would make the application barf at layer 7..
Would it make a difference if I said that some of the traffic was multicast?
Thanks for the help!
You can configure the CSU/DSU for 384kbps, but the default interface bandwidth (see 'show interface sx/x') will dictate the QoS. Applying traffic-shaping on the interface changes that default behavior.
If the hub is connecting to different spokes out of the main interface instead of creating multiple subinterfaces, I recommend creating per-PVC traffic shaping at the hub.
If you provide a config from the hub, I can explain it better.
Now, addressing your questions:
> Even in the presence of proper traffic shaping, if my bandwidth requirements surpass
> what can be handled, will the net result be the routers dropping tons of traffic, correct?
With proper traffic-shaping on both ends of the link, you shouldn't be dropping a 'ton' of traffic.
Which is the hub in your network, Baltimore or Fairbanks? Which are the spokes? I didn't get a clear answer on the bandwidth allocated to the hub and the spokes.
> Would it make a difference if I said that some of the traffic was multicast?
No, we are shaping 'frames'. They can be non-IP traffic for that matter.
I'm not at the customer's site, and I don't have remote access, but I can tell you that I am using sub-interfaces on both ends, like:
frame-relay interface-dlci 185 ietf
The network is less complicated than you think. The customer wanted a point-to-point ckt, but it was too expensive Baltimore-to-Fairbanks, so they're using frame-relay like a point-to-point ckt. There are no hubs or spokes; it's just point-to-point: two frame ports, one in Fairbanks, one in Baltimore, and a single 192K PVC mapped between those ports. No other PVCs at all.
Question: When doing a test, I was moving a huge (30MB) file over the ckt. Doing the math between packets per second and bytes per second, I only achieved 600 bytes per packet, with the 30MB file transfer running for 15 minutes. Is frame-relay performing fragmentation that would account for me NOT seeing near-1500-byte frames? I know that fragmentation can hurt throughput, but I didn't know if it was a huge issue in this instance.
Apply the configuration that I posted at the beginning of this thread on both routers and perform the test again. Post back with results.
Regarding your question about seeing 600 bytes per packet, my guess is that it's related to path MTU. 576 bytes per packet, the largest datagram every Internet host is guaranteed to accept, could be what you're seeing.
However, this isn't the Internet, per se; it's commercial frame-relay. What I suspected was that my IP packets were being fragmented before they were encapsulated into 'frames' on ATT's network. If my equipment is set for 1500-byte packets and I'm seeing them split into ~600-byte packets on the other end, something somewhere is spending a lot of time chopping them up, which is hurting throughput.
Understood you're not running across the actual Internet, but you're still using IP. Following IP rules, up to 576 bytes should always be deliverable, although larger is permitted. My point was, it's possible you're getting 600-byte packets, but I thought it also possible the IP packets might be exactly 576.
Continual fragmentation often does impact performance; however, if path MTU discovery is working correctly, the sender will know the packets are being fragmented and can then size them, at the sender, to avoid fragmentation. I.e., even though the received packets are not 1500 bytes, they might not have been fragmented along the path. Although your sender knows it can send up to 1500 bytes, it's not always required to do so.
When you mention poor performance using the example of 30 MB taking 15 minutes, assuming my math is correct, I get about 267 Kbps. I agree the smaller packets you're seeing will reduce performance, but how much faster did you expect it to be?
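For anyone checking the arithmetic, here's the math (assuming 30 MB means 30 * 10^6 bytes, which matches the ~267 Kbps figure):

```python
# Checking the numbers: 30 MB moved in 15 minutes, and the observed
# ~600 bytes per packet. MB is taken as 10^6 bytes here.
size_bits = 30_000_000 * 8
seconds = 15 * 60

throughput_bps = size_bits / seconds
pkts_per_sec = (throughput_bps / 8) / 600   # at 600 bytes per packet

print(round(throughput_bps / 1000, 1))  # ~266.7 Kbps
print(round(pkts_per_sec))              # ~56 packets/sec
```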
Well, I wasn't really complaining about the overall speed of transfer for the 30MB file; I was just concerned that there was a massive inefficiency here. I know that frame-relay can only put so many frames per millisecond on the wire, and if I have 2X the number of frames, that might degrade things.
With that being said, the unit sending that 30MB file was a Ciprico NAS appliance running a customized Linux OS from the vendor. I guess there's a chance its MTU is not set to 1500 bytes. I must admit I didn't try files from any other source, since I didn't notice the packet-size issue until much later in the day, after I'd left the customer's site. I was primarily investigating drop-outs on the frame-relay, versus small packet sizes. I'll look at this again when I go back.
Unclear about your concern that ". . . frame-relay can only put so many frames per millisecond on the wire . . .". Frame-relay, per se, has the same bandwidth limitations as the link bandwidth; the technology itself usually isn't much of an issue for performance. However, that's not to say there are no additional issues vs. a point-to-point leased line.
We've touched upon the difference between port speeds and CIRs. The latter "should", if you adhere to it, provide the same level of performance as a dedicated point-to-point link of similar bandwidth. If you exceed it, frames might be discarded, or not.
A couple of other issues that don't seem to apply to your usage but can arise with frame relay are asymmetrical bandwidth, on either the access links to the cloud and/or the CIR rates. Another is multiple PVCs sharing a physical port, especially when their CIRs oversubscribe the port's bandwidth.
If you limit your bandwidth to CIR rates, you should obtain performance similar to a dedicated leased line. If not, good reason to complain. If you exceed it, you might not see any difference. (It usually depends on what the frame-relay network's internal available bandwidth is.) One hint: the closer you are to your CIR rate, i.e. the less you exceed it, the better your chances of not seeing any actual discards.
Lastly, keep in mind the difference between discard eligible marked packets vs. congestion notification. The former indicates you're out of contract. The latter, actual frame-relay congestion along the path. Also again, the latter can be used to dynamically adjust your transmit rate so as to avoid actual discards.
Thanks for clarifying this. --I have a question about queue drops I'm seeing on one of my serial interfaces, which is on the Alaska side.
During periods of high traffic on the router (during which my application fails), I see 'Output Drops' on the input queue count up dramatically. From what I've read, this means one of two things; I just wanted confirmation: a.) the switching path through the router is not optimal (process-switched versus interrupt), or b.) the serial interface can't put packets on the wire fast enough, the queue fills up, and packets are dropped.
I'd read that increasing this queue size ("hold-queue 150 out", for instance, to raise it from 75 to 150) might not always be good, if TCP timers time out and the packets are retransmitted anyway.
I need to look at 'show buffers input-interface s0/0/0' to see what packets are clogging the queue, to see if a sub-optimal switching path is to blame.
Could you post the stats you're looking at and the numbers you're concerned with? Want to make sure we're looking at the right numbers.
fyi: Just leaving, so likely be unable to respond before tonight.
Working night-shift on the West coast I see? :-)
Anyway, when I do a "show int s0/0/0", I get the following line:
Input queue: 30/75/187/0 (size/max/drops/flushes) ; Total output drops: 5045
During periods where the router is handling a lot of traffic (close to port-speed), this number counts up by about 500 every 5 seconds or so. --It calms down when the traffic goes down.
I'll have to check this, but I think I'm process-switching my multicast traffic (a bad thing...). The router's CPU is 2-4% all the time (it's a 2610 router from 1999); process switching would drive up the CPU a bit and cause the queues to stay fuller than they need to be.
"number counts up by about 500 every 5 seconds or so" -- if you mean the output drop counter, drops are often to be expected when you path doesn't support the sender's bandwidth. (Quite common LAN to WAN.)
TCP uses drops to self regulate its speed, so TCP drops alone aren't bad although too high a percentage can be. (I recall a rule of thumb that up to 1 or 2% is okay; don't hold me to it.)
Non-TCP traffic usually doesn't self-regulate, and there drops just indicate insufficient bandwidth. How badly the drops impact the non-TCP traffic depends on the non-TCP app.
Input queue: 30/75/187/0 (size/max/drops/flushes) - don't often see packets queued on the input side; only 187 drops though. A couple of links for troubleshooting this issue are: http://www.cisco.com/en/US/products/hw/routers/ps133/products_tech_note09186a0080094791.shtml#topic2 and http://www.cisco.com/en/US/products/hw/modules/ps2643/products_tech_note09186a0080094a8c.shtml
I'm surprised that you're seeing such input queue stats with such a low CPU usage.
Since you're also doing multicast, you might ensure fast switching for it hasn't been disabled. Check for any 'no ip mroute-cache' commands.
Just to state the statistics I'm seeing, which make me think I'm having problems, here they are. They *all* count up by leaps and bounds together. When traffic calms down, they stop counting up:
After issuing, "show int s0/2"
Input queue: 1/75/0/0 (size/max/drops/flushes); Total output drops: 5609
After issuing, "show frame pvc" in Alaska:
out pkts dropped: 1016
out bytes dropped: 256,990
After issuing "show frame pvc" in Baltimore:
in DE pkts: 5708
From reading your prior information, you might be working under some misunderstandings.
The drop counts you're seeing registered on the routers are local to the routers and have nothing to do with drops within the frame-relay cloud. You will see output drops if you overflow the router's queues.
Drops within the cloud are not directly visible to your routers. You can infer them: the "in" packet count on the far side should match the "out" packet count on the near side for the same time interval, so local "out" minus far "in" equals the cloud-path drops.
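In other words, something like this (the counter values below are made up for illustration):

```python
# Inferring cloud drops from the two routers' PVC counters over the
# same time window. Numbers are hypothetical, not from this thread.
near_out_pkts = 125_000   # "out pkts" on the Alaska side
far_in_pkts = 123_900     # "in pkts" on the Baltimore side, same window

cloud_drops = near_out_pkts - far_in_pkts
print(cloud_drops)  # frames lost inside the frame-relay cloud
```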
On your routers, you might see additional (and counted) drops as you shape, because you've reduced the bandwidth -- or you might not, because the shaper might change the default queue size or queuing strategy.
Multicast is more likely to cause a bigger jump in drops with bandwidth reduction, because it doesn't normally back off its transmit rate when there are drops, as TCP should.
With traffic shaping set to your CIR, you should not see any DE-marked frames. If you do, either the shaper's parameters don't agree with the provider's, or the CIR is incorrectly configured on the DLCI by the provider (rare, but it does happen). (If few, often not worth the time to resolve.)
As you correctly note, DE packets are the first candidates for discard within the frame-relay cloud if there's congestion. At CIR, none of your packets should be dropped within the frame-relay cloud (providers can still drop them, but then it's time to see what your SLA really provides). Also, when there's congestion, frame-relay starts marking FECNs, which, if reflected as BECNs, allow some shapers to dynamically decrease transmission speed. (I.e. initially transmit at port speed; if BECNs arrive, back the speed off to CIR.)
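To illustrate the adaptive-shaping idea, here's a toy model. Note this is NOT the exact IOS algorithm; I'm assuming a simple 25% rate decrease per interval that sees a BECN, floored at mincir, which is roughly how the behavior is usually described:

```python
# Toy model of BECN-adaptive shaping (simplified assumption, not the
# exact IOS implementation): back off 25% per BECN interval, never
# dropping below mincir.
cir = 384000      # start at port speed
mincir = 192000   # the contracted CIR, the floor

rate = cir
becns_seen = [True, True, True, True]   # hypothetical congested intervals
for becn in becns_seen:
    if becn:
        rate = max(mincir, rate * 0.75)

print(int(rate))  # backed off toward mincir, never below it
```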
Given the following conditions / requirements, can someone write the traffic-shaping commands I'd need to accomplish this?
1.) My CIR is 192K, and port is 384K
2.) I'd like to always send data at 192K, but never burst above ~300K
3.) Don't ever send data any slower than 192K (paying attention to the mincir value...)
4.) I'd like the traffic-shaping to pay attention to, and respond to BECNs and FECNs. When encountering congestion, back down to the CIR (192K) and no lower. Never burst above 300K.
5.) In order to hold the packets that can't be put on the wire, should I increase the hold-queue from its default of 75 to maybe 150 or so? What's the best value here?
6.) Will multicast traffic fare well with this setup? I realize that if multicast can't throttle its rate, then traffic-shaping can't do much, and I'm looking at a bandwidth upgrade...
Configuration as follows:
map-class frame-relay X
 frame-relay cir 192000
 frame-relay mincir 192000
 frame-relay bc 108000
 frame-relay holdq (maximum number)
 frame-relay adaptive-shaping becn
 frame-relay fecn-adapt
Then, on the physical interface:
 frame-relay traffic-shaping
and on the interface (or subinterface) carrying the PVC:
 frame-relay class X
Try the above config.
As Edison stated in the previous post, you can implement traffic shaping at your end points to shape your frames to 192Kbps.
One point to add: ensure that your mincir equals your CIR. The frame relay switch will mark any frames above the CIR as discard eligible (DE) in the frame header, but they won't be dropped unless there is congestion in the frame relay cloud (the SP experiences congestion). By making mincir equal to your CIR, you ensure adaptive shaping never throttles below the 192K rate at which your frames are protected.
Note: mincir defaults to half the CIR.
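That default lines up with the very first show frame pvc output in this thread (cir 192000, mincir 96000):

```python
# mincir defaults to half the configured CIR when not set explicitly.
cir = 192000
default_mincir = cir // 2

print(default_mincir)  # 96000, as seen in the earlier show frame pvc output
```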