Overloading an oversubscribed port group on 4507R

Unanswered Question
Oct 25th, 2008

Hi there,


I have several 4507R's in my network, and we are currently using a few WS-X4418-GB modules. From the documentation, this module has 6 Gbps to the backplane, broken up into 2 interfaces that get a full 1 Gbps each, plus 4 groups of ports at 4:1 that share 1 Gbps per group. Similarly, all of the 10/100/1000 modules do a similar oversubscription at 8:1.
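Just to sanity-check my reading of the numbers, here's the worst-case arithmetic as I understand it (port counts are my assumption from the datasheet, so correct me if I have them wrong):

```python
# WS-X4418-GB as I read the docs (assumed figures, not vendor-confirmed):
# 18 front-panel GigE ports, 6 Gbps total to the backplane.
full_rate_ports = 2        # ports with a dedicated 1 Gbps backplane link
groups = 4                 # remaining ports arranged in shared groups
ports_per_group = 4        # each group shares one 1 Gbps backplane link
gbps_per_port = 1

# Worst case inside one shared group: all 4 ports busy at once.
group_demand = ports_per_group * gbps_per_port   # 4 Gbps offered
group_capacity = 1                               # 1 Gbps available
ratio = group_demand / group_capacity
print(f"Per-group oversubscription: {ratio:.0f}:1")

# Whole card: 18 Gbps of front-panel capacity over 6 Gbps of fabric.
card_ratio = (full_rate_ports + groups * ports_per_group) / (full_rate_ports + groups)
print(f"Card-level ratio: {card_ratio:.0f}:1")
```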


In general, how would you know if you were overloading a port group on a line card? Is there a reliable way to test this, say if I wanted to generate traffic until I could overload it?


Also, when keeping network latency low is a key design requirement, how do you go about choosing hardware? Would you ever go with 4500's, or would you stick with 6500's?


Thanks in advance

Joseph W. Doherty Sun, 10/26/2008 - 16:29

I haven't worked directly with the 4500 series, but it's very likely there are stats that would show drops on overloaded ports. Do note, if you're concerned about latency (from queuing?), latency can build well before you ever see drops. For that, you would need to monitor queue depth, if the device will report it.
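As a rough illustration of watching for drops, something like this could pull the "Total output drops" counter per interface out of "show interfaces" text. The sample below is made up, not captured from a real 4507R, and the exact field layout varies by IOS version:

```python
import re

# Illustrative sample only; field layout varies by platform/IOS version.
SAMPLE = """\
GigabitEthernet2/3 is up, line protocol is up
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 1420
GigabitEthernet2/4 is up, line protocol is up
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
"""

def output_drops(show_int_text):
    """Return {interface: total output drops} parsed from 'show interfaces' text."""
    drops = {}
    current = None
    for line in show_int_text.splitlines():
        m = re.match(r"^(\S+) is ", line)
        if m:
            current = m.group(1)          # new interface section begins
        m = re.search(r"Total output drops: (\d+)", line)
        if m and current:
            drops[current] = int(m.group(1))
    return drops

print(output_drops(SAMPLE))
```

Non-zero counters climbing on several ports in the same 4:1 group would point at the shared backplane link rather than the individual ports.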


To ensure low network latency, assuming you're thinking of queuing latency, ensure there's sufficient bandwidth that traffic queues very infrequently or only forms shallow queues. If QoS features are available, and if not all traffic requires low latency, it's often possible to provide low latency to the important traffic even on an otherwise oversubscribed port. Of course, the port has to have enough bandwidth that the important traffic itself isn't oversubscribed.
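For a feel of the numbers involved (my own back-of-the-envelope, with assumed frame sizes and queue depths), queuing delay is roughly queue depth in bits divided by link rate:

```python
# Back-of-the-envelope queuing delay: bits queued ahead of you / link rate.
# Frame size and queue depth are assumed values for illustration only.
frame_bytes = 1500
link_bps = 1e9          # a 1 Gbps port

# Time to serialize one full-size frame:
serialization_us = frame_bytes * 8 / link_bps * 1e6
print(f"One 1500B frame at 1 Gbps: {serialization_us:.0f} us")

# Delay added by a standing queue 100 frames deep:
queue_delay_ms = 100 * frame_bytes * 8 / link_bps * 1e3
print(f"100-frame standing queue: {queue_delay_ms:.1f} ms")
```

So even a modest standing queue adds latency a couple of orders of magnitude above the per-frame serialization time, which is why drops alone are a late indicator.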


Personally, I consider the 4500 series light on bandwidth when you get into multiple gig. The newer -E series, though, goes far to remedy this. The 6500 can be a much more powerful box, but so much depends on what the chassis is populated with. For instance, a 6500 with a sup32 is limited to a 32 Gbps bus and 15 Mpps, whereas even the older 4500 using a supIV offers 64 Gbps (fabric) and 48 Mpps. Later 4500 sups offer even better performance, although, as you note, line card slots are limited to 6 Gbps (on pre -E).
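To put those pps figures in perspective, here's the standard wire-rate arithmetic for minimum-size (64 byte) Ethernet frames; the frame overhead numbers are from the Ethernet spec, the comparison itself is just my arithmetic:

```python
# Each minimum-size frame occupies 64B frame + 8B preamble + 12B
# inter-frame gap = 84 bytes = 672 bits on the wire.
BITS_PER_MIN_FRAME = (64 + 8 + 12) * 8   # 672

def wire_rate_mpps(gbps):
    """Mpps needed to keep a link of the given Gbps full of 64-byte frames."""
    return gbps * 1e9 / BITS_PER_MIN_FRAME / 1e6

print(f"32 Gbps bus needs ~{wire_rate_mpps(32):.1f} Mpps")
print(f"64 Gbps fabric needs ~{wire_rate_mpps(64):.1f} Mpps")
```

By that math, a sup32's 15 Mpps runs out well before its 32 Gbps bus does at small packet sizes, while the supIV's 48 Mpps is much closer to balanced against its fabric.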


The architecture of the 4500 lends itself more as a distribution or core router, due to its fabric architecture. The 6500 can be access, distribution or core, but again, much depends on what the chassis is populated with. (It also supports WAN cards and various service modules.) The sup32 and its bus architecture make for a nice access device, but for distribution or core, you likely want to use a sup720, which supplies a fabric. (NB: to take full advantage of the sup720, you'll want to use fabric-enabled line cards, perhaps with DFCs on them.)


Keeping the 4500's limitations in mind, I think the 6500 is the better choice, although the 4500 could still be used for new designs.


PS:

For edge access, although again the 6500 is great, if you're not looking for every possible feature, the 3750 and 3750-E series offer an interesting way to build a chassis without a chassis.

branfarm1 Sun, 10/26/2008 - 19:33

Joseph,


Thanks very much for your response. Clearly I need to see what the new features are on the -E series chassis, and also look into the 6500. Can you clarify, though, what you mean when you say "the 4500 lends itself more as a distribution or core router, due to its fabric architecture"?


Thanks

Joseph W. Doherty Mon, 10/27/2008 - 04:27

The primary new feature of the -E is additional performance. For instance, the line card slots, I recall, support 24 Gbps. The newer sup also offers enough additional performance to provide wire rate for the newer line cards (up to the slot's bandwidth).


When using a device for distribution or core, it's more likely there will be many traffic flows between ports/cards. A fabric architecture allows full data rate between fabric connections, so, for example, multiple ports on multiple cards can pass traffic without contending for shared bandwidth.


Architectures that use shared bandwidth can cause ports/cards to queue for that bandwidth. The 6500 using its "classic bus" (i.e. not fabric) or a 3750/3750-E stack ring can more easily oversubscribe shared bandwidth. To stave this off, the shared medium is given greater capacity, such as the 6500's classic bus providing 32 Gbps, or the 3750/3750-E ring providing 32 or 64 Gbps.
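A quick illustration of how fast a shared bus runs out; the chassis population here is hypothetical, chosen just to show the arithmetic:

```python
# Hypothetical build-out: six 48-port gig line cards all contending
# for one shared 32 Gbps classic bus (illustrative numbers only).
bus_gbps = 32
ports = 48 * 6
offered_gbps = ports * 1    # every port at line rate, worst case

ratio = offered_gbps / bus_gbps
print(f"{offered_gbps} Gbps offered over a {bus_gbps} Gbps shared bus: {ratio:.0f}:1")
```

With a fabric, by contrast, each slot gets its own channel, so that contention happens per-slot rather than chassis-wide.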


For an access device, assuming most traffic wants to transit the uplinks, the uplinks often become the first bottleneck, and a bus or ring that provides more bandwidth than those uplinks is fine.
