933 Views · 13 Helpful · 10 Replies

Cat 4500 series product question

bkoum
Level 1

Hi,

I would like to ask what the total bandwidth figure for a supervisor engine means. For example, the Supervisor II-Plus-10GE supports 108 Gbps; is this the total bandwidth that can be aggregated across the LAN?

Thanks


10 Replies

Joseph W. Doherty
Hall of Fame

The bandwidth you're reading about is what's offered by the fabric/backplane of the switch. Switches often offer less internal bandwidth than all of their connection ports could use.

For example, a switch with 48 gigabit ports would require 96 Gbps of fabric bandwidth (2x for duplex traffic).

The original 4500 chassis provides 6 Gbps per line card slot. A 4506 or 4507R each have 5 line card slots: 6 Gbps * 5 slots * 2 (duplex) = 60 Gbps. The supervisor's dual 10 gig ports add 40 Gbps, and its 4 gig ports add another 8 Gbps. The combined 108 Gbps corresponds to the values seen in Table 1 in: http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps4324/product_data_sheet0900aecd80356bde.html
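The arithmetic above can be sketched in a few lines (illustrative Python; the constants are the figures quoted in this thread, not anything queried from the switch):

```python
# Fabric bandwidth arithmetic for a 4506/4507R with a Sup II-Plus-10GE,
# using the figures quoted above (illustrative constants, not a Cisco API).
SLOT_GBPS = 6        # per line card slot on the original (non -E) chassis
SLOTS = 5            # payload slots in a 4506/4507R
DUPLEX = 2           # fabric bandwidth is quoted full duplex

slot_bw = SLOT_GBPS * SLOTS * DUPLEX    # 60 Gbps for the line card slots
uplink_bw = (2 * 10 + 4 * 1) * DUPLEX   # 48 Gbps for the sup's own ports

print(slot_bw + uplink_bw)  # 108 -> matches the data sheet value
```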

Jon Marshall
Hall of Fame

Hi

It means that the switch fabric can support up to 108 Gbps of throughput, much like the Sup720 supports up to a 720 Gbps switch fabric.

How much throughput you actually get on the switch depends entirely on which line cards you use. If the total throughput of your line cards does not exceed 108 Gbps and each individual line card does not exceed its allocated bandwidth, then you are non-blocking on the switch fabric.

HTH

Jon

I believe josephdoherty and jon are pioneers in this ;)

Interesting post, but it's not all clear to me yet.

I have the following data:

Backplane C4506-E: 120 Gbps full duplex

Sup II-Plus-10GE in a Cisco Catalyst 4506/4506-E Chassis supports 108 Gbps/81 mpps

Does this mean that the total usable backplane bandwidth is restricted to 108 Gbps, or is it 120 Gbps, given that only the first packet needs to go to the supervisor if not using dCEF?

The 4506-E supports 6 Gbps (classic line card) or 24 Gbps (E-series line card) per slot.

How does this relate to the full bandwidth?

Or should I interpret it like this:

The chassis has unlimited bandwidth (suppose unlimited ;-), to make things easier). If you use the Sup II-Plus-10GE with this chassis, you can use up to 108 Gbps.

Let's say I have 5 classic line cards (max bandwidth usage: 5 x 6 = 30 Gbps). Does that mean 78 Gbps can be allocated for my uplinks on the supervisor?

The Sup II-Plus-10GE has 2 TenGig and 4 Gig connections. This means the theoretical max full-duplex bandwidth that can be allocated for my uplinks is 48 Gbps:

48 Gbps = 2 x 10 Gig x 2 (full duplex) + 4 x 1 Gig x 2 (full duplex).

By the way, in this context, what is meant by "x Gigabit non-blocking"? No oversubscription?

What if I have 5 E-series cards (24 Gbps)? Can I still benefit from the 10 Gig connections on the supervisor, for example for very high speed inter-switch traffic?

About oversubscription:

To make it easy: 24 gig ports with an E-series card (24 Gbps) is seen as 1:1. But shouldn't this be 1:2, since for full duplex it is 24 ports x 2 = 48 gigabits per second?
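For what it's worth, the 1:1 figure works out because the oversubscription ratio compares one-way port bandwidth against one-way slot bandwidth, so the duplex doubling appears on both sides and cancels. A minimal sketch (the function name is mine, just for illustration):

```python
# Oversubscription ratio: offered one-way port bandwidth vs. one-way slot
# bandwidth. Duplex doubles both sides, so it cancels out of the ratio.
def oversubscription(ports, port_gbps, slot_gbps):
    return (ports * port_gbps) / slot_gbps

print(oversubscription(24, 1, 24))  # 1.0 -> 1:1, non-blocking (E-series card)
print(oversubscription(48, 1, 6))   # 8.0 -> 8:1 (48 gig ports on a 6 Gbps slot)
```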

What I really want to know from this whole story is whether it's better to connect all uplinks (5 x 1 gig: 4, plus 1 via TwinGig) to the supervisor module, or whether I should divide some uplinks over the line cards.

Situation 1: line cards at 6 Gbps

Situation 2: line cards at 24 Gbps

Thus, supervisor uplink speed vs. line card speed.

I know, many questions, but I find this pretty difficult. I've already read some topics about this on NetPro... I'm almost hopeless in this domain, I believe.

Yes, quite a few questions. Instead of answering them specifically, let me try some general explanations which might help with understanding the specs.

On the 4500 series, the non -E models provide 6 Gbps per slot and the -E models provide 24 Gbps per slot. This bandwidth is the maximum that can be used for communication to/from the rest of the chassis. This bandwidth might be more than necessary for a particular card, exactly what a card needs, or oversubscribed. Examples of these three cases (on non -E) would be the WS-X4148-RJ (4.8 Gbps), the WS-X4306-GB (6 Gbps), and the WS-X4448-GB-RJ45 (48 Gbps).

Although the chassis slots have a spec for their bandwidth, this bandwidth somehow has to be actively connected together. Within the 4500, the supervisor provides a "fabric" that the slots' bandwidth utilizes. Like card slot bandwidth, the actual fabric bandwidth might not support all the bandwidth the chassis could use, might support exactly the needed bandwidth, or might support more than the chassis needs. One point of confusion: fabric bandwidth is often quoted for duplex (e.g. a 100 Mbps duplex port would rate 200 Mbps of fabric bandwidth).

If we look at the 4506, we see it has 5 line card slots (the 6th is used by the supervisor). Again for a non -E model, since each slot supports 6 Gbps, that's 30 Gbps, so the full fabric bandwidth required for the card slots would be 60 Gbps. The supervisor may also provide its own ports, which add to the fabric bandwidth needed. For example, the Supervisor II-Plus-10GE's 4 GE and 2 10 GbE ports require another 48 Gbps of fabric bandwidth (60 + 48 = 108 Gbps [NB: the spec value for this supervisor]).

Lastly, forwarding minimum-sized Ethernet packets requires (about) 1.488095 Mpps for each 1 Gbps. For the bandwidth value, take half of the fabric bandwidth; for our 108 Gbps that is 54 Gbps * 1.488095 = 80.35713 Mpps (NB: the spec value for the Supervisor II-Plus-10GE is 81 Mpps).
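As a quick check of that pps arithmetic (a sketch; 1.488095 Mpps per Gbps is the standard rate for minimum-size 64-byte Ethernet frames including preamble and interframe gap):

```python
# Max packet rate for minimum-size (64-byte) Ethernet frames:
# each 1 Gbps one-way carries about 1.488095 Mpps.
MPPS_PER_GBPS = 1.488095

fabric_duplex_gbps = 108              # Sup II-Plus-10GE fabric spec
one_way_gbps = fabric_duplex_gbps / 2 # 54 Gbps
mpps = one_way_gbps * MPPS_PER_GBPS

print(round(mpps, 5))  # 80.35713 -> close to the ~81 Mpps data sheet figure
```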

Once you've determined the overall capacity of the device, you often have to read the fine print. Do the supervisor ports support "line rate"/"wire speed"/full fabric bandwidth? How is slot bandwidth divided between multiple ports? An interesting example is the WS-X4418-GB, which has two ports with full port bandwidth to the fabric, while the others are 4:1.

PS:

Non-blocking for a fabric should mean the fabric's bandwidth isn't oversubscribed by the connections to the fabric. (NB: This alone doesn't guarantee traffic won't queue, since multiple fabric connections sending to the same fabric connection will oversubscribe it.)

Great!

One correction maybe:

On the 4500 series, the non -E models provide 6 Gbps per slot and the -E models provide 24 Gbps per slot.

I thought for this it was like this:

On 4506E:

6Gbps if you use a classic line card and

24Gbps if you use an "E"nhanced linecard.

Everything you wrote clarifies a lot.

One more question:

So I can conclude that the Supervisor II-Plus-10GE is fully optimized for using its own ports (2 x 10 gig and 4 x 1 gig) together with 5 classic line cards with a bandwidth of 6 Gbps each.

My only remaining question is what happens if 24 Gbps line cards are used: 5 x 24 Gbps x 2 = 240 Gbps. Probably it's not the goal to use 5 line cards of 24 Gbps, but will the supervisor ports always be fully served, with the remaining bandwidth left for the slots?

I know the situation where all slots are working at max speed is quite unlikely ;-)

Anyway, I'll keep this text with me for later reference ;-)

One extra question about the above.

If, for example, a WS-X4306-GB module is present, does it make a difference whether you use a port on the WS-X4306-GB or a supervisor uplink port? And which do you advise using?

"One correction maybe:

On the 4500 series, the non -E models provide 6 Gbps per slot and the -E models provide 24 Gbps per slot.

I thought for this it was like this:

On 4506E:

6Gbps if you use a classic line card and

24Gbps if you use an "E"nhanced linecard. "

Correction? I agree it would have been clearer if I had written "On the 4500 series, the non -E models provide a maximum of 6 Gbps per slot and the -E models provide a maximum of 24 Gbps per slot."

What you wrote is incorrect only in the sense that, besides the chassis and line card, the supervisor is critical too. The non -E chassis models only support up to 6 Gbps per slot; the newer -E models can work the same as non -E models, but to obtain up to 24 Gbps per slot you need the newer -E chassis, the newer -E line cards, and the newer Supervisor 6-E.

"If, for example, a WS-X4306-GB module is present, does it make a difference whether you use a port on the WS-X4306-GB or a supervisor uplink port? And which do you advise using?"

From a performance standpoint it shouldn't matter. From a connection standpoint it may, since the WS-X4306-GB uses GBICs. Or, if you wanted to use copper gig for uplinks, it might be less expensive to use only 6 ports of a WS-X4424-GB-RJ45 while still supporting maximum performance.

The fact that you need a Sup 6-E to use 24 Gbps per slot makes sense ;) (if you calculate the total bandwidth for the Sup II-Plus-10GE, you get 108 Gbps).

That was what I didn't understand.

If SFPs/TwinGig converters are not a problem, I'll use the supervisor ports for uplinks.

Is there no need to use other blades for uplinks (from a redundancy aspect)? And what exactly happens if the only sup fails? A chassis without any functionality? Reduced functionality (e.g. will normal basic switching, such as a trunk uplink, still work)?

Based on this answer I'll know whether it matters to divide the uplinks over an extra blade with enough capacity (WS-X4306-GB) or not.

If the only supervisor fails, so does the whole chassis. That's why there are 45xxR models: they accept a second supervisor which takes over if the first fails.

"Based on this answer I'll know whether it matters to divide the uplinks over an extra blade with enough capacity (WS-X4306-GB) or not."

That only makes sense if you want to avoid losing all uplinks when you lose a module.

PS:

You're likely to ask: what happens with dual supervisors, if you are using their uplinks? I'm not sure without some research for the 4500s. Sometimes supervisor links can only be used on the active supervisor; sometimes the inactive supervisor's links can also be used (assuming it hasn't failed).

:)

It helped me a lot ;) I asked more than I needed to know for this, but it will be useful in the future. I did some research on my own on CCO but couldn't find the information you gave me. Very useful!

But I realize that I only know a little about the different switching architectures and the real physical switching (with buffers etc.). I'll learn it as I gain more and more experience.
