mpls cloud bandwidth problem

Unanswered Question

I need opinions on the best options for controlling bandwidth.

We have an OC3 ATM circuit with a 20 Mb PVC connecting to the Qwest MPLS cloud in our main data center.

All endpoints connect into this MPLS cloud, and all have full T1 access.

We discovered today that when someone on a PC in a remote office with a T1 opens a web page on the Internet, the proxy server sends the page to the PC at such a high rate that it instantly saturates the remote T1 and breaks everyone in the office.

WWW is just one example; any application can instantly saturate the T1, because the systems in the data center send at a high rate and the OC3 sends at a high rate, and the only thing that slows the connection down is the Qwest frame switch once the frame-relay T1 has exceeded its CIR of 1024 and Bc of 512, and then some.

Before we migrated to MPLS we didn't have this problem, because it was a hub-and-spoke setup and every spoke was an ATM PVC with VBR-NRT configured, so the high-capacity hub router knew what the available bandwidth was at the remote frame-relay site.

Does anyone have any thoughts on controlling the flow of traffic on the ATM OC3 router so as not to overflow the remote frame-relay T1s?

thanks

Peter Paluch Wed, 09/30/2009 - 23:51

Hello Steven,

Is traffic shaping an option for you? I believe shaping could be used to prevent saturation of the T1 lines. You have yourself stated the possible settings: a CIR of 1024 kbps and a Bc of 4096 b. Is such a solution feasible for you?
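As a rough sketch, that shaping could look like the MQC policy below; the interface and policy names are invented for illustration, and the Bc here is expressed in bits as IOS expects:

```
! Hypothetical hub-side shaper toward one remote T1 site
policy-map SHAPE-REMOTE
 class class-default
  shape average 1024000 4096   ! CIR 1024 kbps, Bc 4096 bits (512 bytes)
!
interface ATM1/0.101 point-to-point
 service-policy output SHAPE-REMOTE
```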

Best regards,

Peter

Joseph W. Doherty Thu, 10/01/2009 - 04:40

As Peter suggests, shaping (for downstream cloud egress bandwidth) could be the solution. This should work very well if your traffic flow is still hub-and-spoke. However, if there's spoke-to-spoke traffic (often a "feature" of MPLS clouds), the alternative is to take advantage of the QoS framework (if any) provided by your MPLS vendor.

One way to implement the hub shaper is with a CBWFQ policy that defines a class per remote site (normally matched against the destination IP address block) and shapes within the class. CBWFQ class shapers on most Cisco implementations use WFQ, which might be all you need. However, you often have the option to implement a subordinate policy if you want to prioritize different types of traffic differently.
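A minimal sketch of that per-site CBWFQ approach might look like this; the site subnet, ACL, and policy names are all assumptions for illustration:

```
! Hypothetical: one class per remote site, matched on destination subnet
ip access-list extended SITE-A-NETS
 permit ip any 10.1.1.0 0.0.0.255
!
class-map match-all SITE-A
 match access-group name SITE-A-NETS
!
policy-map SITE-A-CHILD
 class class-default
  fair-queue                   ! optional subordinate policy
!
policy-map HUB-SHAPER
 class SITE-A
  shape average 1536000        ! shape to the remote T1 link rate
  service-policy SITE-A-CHILD
!
interface ATM1/0
 service-policy output HUB-SHAPER
```

Additional remote sites would each get their own class-map, ACL, and shaped class under HUB-SHAPER.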

When shaping for low-bandwidth links, there's a good chance (in the US or Canada) you don't need to shape for the FR CIR; link bandwidth often works well. Also, most shapers only count L3, so you might want to shape a little slower to account for L2 overhead; but for FR and just data, you'd probably be fine shaping for link bandwidth.

BTW, there is a limitation on the number of classes (i.e. remote sites) that can be supported. I believe for the later CBWFQ IOS implementations it's at least 256(?).

vmiller Thu, 10/01/2009 - 07:28

To add to these responses, it would serve you well to understand how your provider deals with marked traffic. Each provider does things a little differently.

Joseph W. Doherty Thu, 10/01/2009 - 10:45

Although perhaps tedious, CBWFQ class shaping could still be an option.

As to the number of CBWFQ-supported classes, I found this:

Q. How many classes does a Quality of Service (QoS) policy support?

A. In Cisco IOS versions earlier than 12.2 you could define a maximum of only 256 classes, and you could define up to 256 classes within each policy if the same classes are reused for different policies. If you have two policies, the total number of classes from both policies should not exceed 256. If a policy includes Class-Based Weighted Fair Queueing (CBWFQ) (meaning it contains a bandwidth [or priority] statement within any of the classes), the total number of classes supported is 64.

In Cisco IOS versions 12.2(12), 12.2(12)T, and 12.2(12)S, this limitation of 256 global class-maps was changed, and it is now possible to configure up to 1024 global class-maps and to use 256 class-maps inside the same policy-map.

huangedmc Thu, 10/01/2009 - 16:21

In regard to the above response:

"Also, most shapers only count L3, so you might want to shape a little slower to account for L2 overhead"

Could you please post a URL to a document that explains it?

We recently migrated to Verizon's MPLS.

We purchased a 300 Mbps Ethernet connection to the MPLS cloud at the head end.

Verizon told us they had to shape it down to 255 Mbps to account for approx. 15% of L2 overhead, but couldn't produce any document to support their claim.

Back to the OP's situation... if doing CBWFQ is a hassle, you could consider using Packeteer's PacketShaper.

If most of your traffic is hub-to-spoke, all you need is a big shaper at your head end.

Joseph W. Doherty Fri, 10/02/2009 - 05:27

Do you have a non-disclosure agreement with Verizon? If so, ask about documentation again.

Verizon is correct about shaping down. The L2 overhead percentage actually varies with frame size: L2 overhead is always the same per frame, so as the frame payload shrinks, the overhead percentage increases (much like IP headers vs. IP payload).
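To make that concrete, here is a quick calculation sketch, assuming standard Ethernet per-frame overhead (14-byte header, 4-byte FCS, 8-byte preamble, 12-byte inter-frame gap):

```python
# Sketch: L2 overhead percentage varies with frame size (Ethernet assumed).
# Per-frame overhead: 14 B header + 4 B FCS + 8 B preamble + 12 B inter-frame gap.
L2_OVERHEAD_BYTES = 14 + 4 + 8 + 12  # 38 bytes per frame

def l2_overhead_pct(l3_payload_bytes: int) -> float:
    """Percent of wire capacity consumed by L2 overhead for a given L3 size."""
    wire_bytes = l3_payload_bytes + L2_OVERHEAD_BYTES
    return 100.0 * L2_OVERHEAD_BYTES / wire_bytes

print(round(l2_overhead_pct(1500), 1))  # full-size frames: overhead is small
print(round(l2_overhead_pct(46), 1))    # minimum-size frames: overhead dominates
```

So a blanket 15% figure only corresponds to a fairly small average frame size; with mostly full-size frames the real overhead is far lower.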

Unless you have a shaper that can account for per-packet L2 overhead (e.g. http://www.cisco.com/en/US/docs/ios/12_0s/feature/guide/12sl2os.html), you might need to shape for the worst case (especially if working with stringent QoS requirements, e.g. VoIP). For less stringent QoS requirements, you might shape for your average case.

Some (not exactly what you want, I suspect) documentation:

http://www.cisco.com/en/US/docs/ios/ios_xe/qos/configuration/guide/eth_overhead_acctng_xe.pdf

http://sd.wareonearth.com/~phil/net/overhead/

vmiller Fri, 10/02/2009 - 14:11

You will need to get a GOLD CAR from Vzb so that your markings get honored in the cloud. The Vzb MPLS cloud supports 5 queues. The attachment shows how they do it.

Attachment: 
