Questions about backplane bandwidth


Greetings all-

As a networking neophyte, I'm having a bit of trouble understanding some (probably) basic concepts about switching backplanes... I was hoping someone could enlighten me on a couple of topics...

1: What's the difference between "forwarding rate" and "backplane bandwidth"? Are they connected?

2: What exactly do "fabric enabled" line cards get you in a 6500 series switch? I noticed that the fabric enabled line cards are twice the price of non-fabric enabled line cards... Does this mean that the line card shares the aggregate backplane bandwidth available on the SUP, thus enabling, say, a 48-port card (with GigE ports) to forward packets at "line speed" (i.e., all 48 ports spewing a gigabit of traffic simultaneously)?

3: I notice that other switches out there claim a higher forwarding rate than similar Cisco switches. For example, the HP ProCurve 2810-48G claims to forward packets at 71.4 Mpps at 64 bytes, while the 3750 with 48 10/100/1000 ports only forwards at 38.7 Mpps. The 3750 is like $15,000 or something; the HP is like $3,000. I know the 3750 has a few more features, but how can HP claim to forward faster than such an expensive (if only roughly equivalent) Cisco switch?

Thanks for any insight!


lamav Sun, 02/17/2008 - 15:59

Hi, Erich:

To answer your questions....

1. The backplane bandwidth refers to the bus bandwidth/speed available for communication between the line cards and the SUP module in a chassis-based switch, like the 6500. The classic Cisco backplane speed is 32Gbps.

The forwarding rate refers to the actual rate at which packets are forwarded by the line cards. Typically, that rate with, say, the SUP 720 and a classic line card is 15 Mpps system wide. If, however, a DFC daughter card (Distributed Forwarding Card) is deployed on a dCEF720 line card, the forwarding rate can be significantly increased, to as much as 400 Mpps. The dCEF256 line card will yield connectivity on the order of 256 Gbps backplane speed with a forwarding rate of 210 Mpps. In that case, an SFM (Switch Fabric Module) must exist in the chassis. On the 720, it is built on board.

2. 'Fabric enabled' refers to a bus architecture augmentation in the form of a matrix. Cards that are fabric enabled have an extra connector on the back that plugs into the fabric matrix. This allows a 6500 with, say, a SUP 720 and a dCEF720 line card to have a fabric speed of 720 Gbps, as opposed to the classic 32 Gbps backplane speed when using classic cards. If dCEF256 line cards are used with the SFM module, the switch fabric speed will be 256 Gbps. The Cisco fabric enabled line cards are the 6700 series.

On a 6509, all slots are fabric enabled; on a 6513, only slots 9-13 are.

3. I don't know anything about that HP switch. Sorry.

Hope my ramblings helped :-)

Joseph W. Doherty Sun, 02/17/2008 - 17:23

1. "forwarding rate" is measured in packets (or frames?) per second, "backplane bandwidth" is measured in bits per second. Both are important concerning performance.

For backplane, assume you have just two 100 Mbps full duplex ports. To allow full wire rate to transit between the two ports, you would need 200 Mbps of bandwidth. Anything less and congestion could form.

Think what would happen if you had two network devices with 100 Mbps ports on the outside but only a 10 Mbps link between the two devices. I'm sure you can see you're going to be limited by the 10 Mbps on the inside. I.e., you might send 100 Mbps toward the first device, but only 10 Mbps can come out the far side.

Now think of the two devices combined into one device: the 10 Mbps link becomes the backplane bandwidth.

Again to allow 100 Mbps through the device, we need at least 100 Mbps times 2 (remember full duplex) to support it.

As to packet forwarding, packets (frames really) delimit groups of bits on the wire. As each arrives, it takes some time to analyze what to do with the packet and how to direct it. Usually the time spent is the same regardless of size of the packet.

For Ethernet, 64 byte packets require about 149 Kpps to support 100 Mbps. 1518 byte packets require only about 8 Kpps to support 100 Mbps. If the packet forwarding rate is slower than required, the effective bandwidth would be reduced. E.g. 75 Kpps would only allow about 50 Mbps for 64 byte packets but would allow 100 Mbps for 128 byte or larger packets. (NB: for full duplex, you need to double the pps rate if you want to support full bandwidth in both directions.)
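To make those numbers concrete, here's a rough sketch of the math (my own illustration, not from any Cisco document): on the wire, each Ethernet frame also occupies 8 bytes of preamble/SFD plus a 12-byte inter-frame gap, which is why the rates are lower than bandwidth divided by frame bits alone would suggest.

```python
# Sketch: theoretical Ethernet frame rates and the effect of a pps-limited
# forwarder. The 20 bytes of per-frame overhead (preamble/SFD + inter-frame
# gap) is in addition to the 64..1518 byte frame itself.
OVERHEAD_BYTES = 8 + 12  # preamble/SFD + inter-frame gap

def wire_rate_pps(link_bps, frame_bytes):
    """Maximum frames per second a link can carry at a given frame size."""
    return link_bps / ((frame_bytes + OVERHEAD_BYTES) * 8)

def effective_bps(link_bps, frame_bytes, forwarder_pps):
    """Usable bandwidth when the forwarder caps packets per second."""
    wire_pps = wire_rate_pps(link_bps, frame_bytes)
    return link_bps * min(1.0, forwarder_pps / wire_pps)

print(round(wire_rate_pps(100e6, 64)))              # ~149 Kpps for 64-byte frames
print(round(wire_rate_pps(100e6, 1518)))            # ~8 Kpps for 1518-byte frames
print(round(effective_bps(100e6, 64, 75e3) / 1e6))  # ~50 Mbps at 75 Kpps
```

The last line matches the 75 Kpps example above: a forwarder at half the required 64-byte rate yields roughly half the link bandwidth.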

2) "fabric enabled" cards might provide more bandwidth to other cards within the chassis, especially aggregate, and may also offer distributed processing.

The 6500 chassis offers card slot connections for a 32 Gbps shared bus, plus one or possibly two fabric channels to each card slot, where the fabric channels are either 8 Gbps or 20 Gbps. "Fabric enabled" cards have one or two fabric channel connections, of either the 8 Gbps or 20 Gbps type.

The fabric is supplied on either its own card (the older 8 Gbps channels - 256 Gbps total) or on the sup720 (the newer 20 Gbps channels - 720 Gbps total).

You ask about a 48 port card with gig ports. Regardless of the number of cards in the chassis, using the bus you're limited to 32 Gbps for the whole chassis.

If you have a 48 port card with dual 20 Gbps channels (and supporting chassis), it can push 40 Gbps (duplex) to the fabric which supports similar bandwidth to similar cards. So, 9 cards with dual 20s duplex can push 720 Gbps.
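As a back-of-the-envelope check on those figures (my arithmetic, using the numbers quoted in this thread):

```python
# Fabric bandwidth arithmetic: dual 20 Gbps channels per card, counted in
# both directions (duplex), across a fully populated 9-slot chassis.
CARDS = 9                # e.g. a 6509 with fabric cards in every slot
CHANNELS_PER_CARD = 2    # dual fabric channels
GBPS_PER_CHANNEL = 20
DUPLEX = 2               # each channel carries 20 Gbps in each direction

per_card_gbps = CHANNELS_PER_CARD * GBPS_PER_CHANNEL * DUPLEX
total_gbps = CARDS * per_card_gbps
print(per_card_gbps, total_gbps)  # 80 720
```

That is where the "720" in sup720 fabric marketing comes from: 9 slots times 80 Gbps duplex per slot.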

3) Unknown what all the HP you mention provides, but even though a 48 gig port 3750 is not a 64 byte wire rate device, some of the "few more features" might be very significant. The 3750 is a L3 switch, not just L2, i.e. it can route. It provides a high speed stacking feature (compare with its sibling the 3560, which doesn't have that feature). Other features might also be unique to the Cisco in the areas of multicast, QoS, security, etc. (Oh, and the advanced features actually usually work correctly too.)

There's often much more to the true worth of a switch or router than just its raw performance specs.

lamav Sun, 02/17/2008 - 17:42


"For Ethernet, 64 byte packets require about 149 Kpps to support 100 Mbps. 1518 byte packets require only about 8 Kpps to support 100 Mbps."

Explain the math?


Joseph W. Doherty Sun, 02/17/2008 - 18:21

Unfortunately, I can't provide the specifics off the top of my head. However, they are based not just on 64 bytes = 512 bits divided into the Ethernet bandwidth; they also take into account Ethernet framing overhead bits and inter-frame timing requirements. (Which is why they are lower than you might expect, and why they vary at different frame sizes: the overhead ratio changes. Larger is more efficient, just like with IP frame overhead.)

For Ethernet 64 byte packets, the values are:

14,881 pps for 10 mbps

148,809 pps for 100 mbps

1.488095 mpps for 1 gbps

For some other Ethernet packet sizes, Cisco has a table (I think attached to 6500 performance analysis somewhere) as:


Packet Size (Bytes):          64    128    256    512    1024    1518
Theoretical Maximum Kpps:   1488    845    453    235     120      81
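For what it's worth, the table's values fall out of the same framing-overhead math; here's a sketch of my own that reproduces the row for gigabit Ethernet:

```python
# Sketch: reproduce the "Theoretical Maximum Kpps" row. Each frame occupies
# (frame + 8 bytes preamble/SFD + 12 bytes inter-frame gap) on the wire.
def max_kpps(frame_bytes, link_bps=1e9):
    """Theoretical max frame rate in Kpps for a given frame size."""
    return link_bps / ((frame_bytes + 20) * 8) / 1e3

for size in (64, 128, 256, 512, 1024, 1518):
    print(size, round(max_kpps(size)))
# -> 1488, 845, 453, 235, 120, 81 Kpps, matching the table
```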

If you go searching the Internet, you should find the 64 byte Ethernet values all over. The other values are harder to find.

If you're still interested in the actual math, you can also find it on the Internet, but I don't have a link (sorry).


Found Cisco table, see either table 1 or table 2 "Theoretical Maximum Kpps" in:

lamav Sun, 02/17/2008 - 18:37

Thanks, Joseph.

Joseph W. Doherty Sun, 02/17/2008 - 18:56

You're certainly welcome.


Just did some searching for the "math", found this:


Also found "math" in:

(NB: Some other interesting calculations, such as effective payload bandwidth, especially with minimum sized packets.)

Joseph W. Doherty Sun, 02/17/2008 - 19:25


Forgot to mention pps rates of 6500 sups. The sup720 supports 15 Mpps for non-fabric and 30 Mpps for fabric cards. Sup32 supports 15 Mpps; fabric not supported.

Yes, the pps rates for both are slower than some 3750s. However, many (all?) fabric cards can take optional DFCs, which provide local packet forwarding intelligence. The later fabric cards' DFCs support 48 Mpps (again, per card).

Also, generally many, many more features available within 6500 series than 3750 series. So, again, bandwidth and pps performance, alone, might not be the best selection criteria.

I think I'm starting to grasp this... So if I have this configuration:

6509 chassis

One SUP720-10G

Five 48 port GigE Fabric-Enabled Line Cards

(Do I need a dCEF-720 line card too?)

And I have cluster nodes plugged into every port on those five line cards, and every node on the cluster decided to spew 1Gb/s onto each port (to another node, for example), then I would get congestion on each card because the card itself can only push 20Gb/s outbound + 20Gb/s inbound (40Gb/s duplex)?

So in essence, there isn't really a way to fully eliminate congestion in the event of a total flooding of data to every port?

I do realize that's a lot of data. :) I'm just trying to imagine a best case scenario in a situation where infiniband/myrinet isn't available.

royalblues Mon, 02/18/2008 - 05:59

Nice explanation Joseph...

deserves rating 5 :-)


lamav Mon, 02/18/2008 - 07:31


You're right, there is no way to provide a dedicated path on the switch's backplane for each port in a line card, hence the name, shared bus. And yes, if every port started sending traffic at its maximum port speed all at once and continuously, you would experience congestion within the switch's architecture.

But that is expected, of course. It's referred to as oversubscription. And the idea of allowing more than a 1:1 oversubscription rate relies on a technology known as statistical multiplexing.

The underlying assumption is that you will NOT have every port sending traffic at its maximum capability all the time. Instead, there will be times when ports are idle or sending minimum traffic, like say the kind of traffic that maintains a session for an application (keepalive, so to speak).

And when we talk about idle time, we may be talking about the millisecond time frame, not minutes or hours.
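The oversubscription arithmetic for the hypothetical line card discussed in this thread (48 x 1 Gbps ports behind dual 20 Gbps fabric channels) looks like this, as a quick sketch:

```python
# Sketch: oversubscription ratio of a line card, i.e. total port capacity
# divided by the bandwidth available toward the fabric.
def oversubscription(ports, port_gbps, fabric_gbps):
    return (ports * port_gbps) / fabric_gbps

# 48 gig ports feeding 40 Gbps of fabric channels (one direction)
print(oversubscription(48, 1, 40))  # 1.2, i.e. a modest 1.2:1 ratio
```

Statistical multiplexing is the bet that a 1.2:1 (or much higher) ratio never actually hurts, because all ports are rarely busy at once.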

As for forwarding rates, which are measured in packets per second: the speed at which the switch forwards packets depends on the shared bus speed, on whether there exists a switch fabric for "express" communication between the different line cards, and on the manner in which the switch accesses forwarding information when a port receives a packet for processing and transmission.

When a packet is received on a switch port, the switch must determine how to forward it. The packet's header is forwarded to the SUP module via the shared bus for examination and processing. Based on the destination address, the switch will use the information in the routing table to make a forwarding decision regarding the output port.

Cisco has created a mechanism known as CEF (Cisco Express Forwarding). The information in the CEF table is a derivative of the L3 routing table and is known as a FIB, or Forwarding Information Base. The purpose of the FIB and the associated next hop table, which is derived from the ARP table, is to streamline the lookup and switching process. Instead of having to do a route table lookup (process switching) for every packet that the switch receives, the switch relies on the CEF FIB, which is sort of an abridged routing table, to make the explanation simple. This streamlines the forwarding process considerably.
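Conceptually (this is an illustration of my own, not Cisco's implementation), a FIB boils down to a longest-prefix-match table mapping destination prefixes to egress information, so per-packet forwarding is a quick lookup rather than a full route-table walk:

```python
# Sketch: a toy FIB as a longest-prefix-match table. The prefixes and port
# names below are hypothetical, purely for illustration.
import ipaddress

FIB = {
    ipaddress.ip_network("10.0.0.0/8"): "Gi1/1",
    ipaddress.ip_network("10.1.0.0/16"): "Gi1/2",
    ipaddress.ip_network("0.0.0.0/0"): "Gi1/48",  # default route
}

def fib_lookup(dst):
    """Return the egress port for the longest matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in FIB if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return FIB[best]

print(fib_lookup("10.1.2.3"))   # Gi1/2 (the /16 beats the /8)
print(fib_lookup("192.0.2.1"))  # Gi1/48 (falls through to the default)
```

Real hardware does this in TCAM/ASICs, of course; the point is only that the lookup structure is precomputed from the routing and ARP tables.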

The reason we say that you can attain higher forwarding rates if you use a DFC (Distributed Forwarding Card, for dCEF) is that the CEF FIB will be "downloaded" to each individual line card's ASIC (Application-Specific Integrated Circuit) hardware, instead of remaining centrally located on the SUP module's Policy Feature Card (PFC). So the shared bus is relieved of the need to support the transmission of data up and down between the line cards and the SUP module.

If the destination port resides on the same switch, the switch fabric will provide a direct connection for faster forwarding.

Just think of the manner in which people in remote villages had to drive to a centrally located shopping area before shopping centers were built in their local area. Yes, there was a system of highways to support the traffic, but why clutter them with massive amounts of people-flow when they can just shop locally? They would get a lot more done that way, wouldn't they?


Correct Answer
lamav Mon, 02/18/2008 - 08:41


It's a SUP 720, not a 750.

And the 48 port linecards ARE the dCEF 720 cards. dCEF 720 refers to one of the 4 families of linecards. It refers to capability.

Here is a GREAT link for you, Erich. It's all about what we are discussing now. You'll love it.

By the way, you can rate my post if you think I was helpful. :-)


My bad, I meant 720, not 750... Typo... :)

Thanks for the clarification (and the link!). From previous posts it almost sounded like the DFC was a 'separate' card from the actual 48 port line cards, thanks for clarifying.

Basically, we're going to set up a 200 node cluster (each node with 1 GigE interface) and about 30-40 storage servers (each with 2 GigE interfaces using LACP to the switch) on the same subnet, serving a parallel file system (all under Linux). The parallel file system spreads file fragments across all 40 storage servers, so the cluster would, in theory, hammer the file system servers for data constantly over GigE.

I'm sure you're correct that the ports will not be totally saturated all the time; I'm just looking for a solution that will virtually eliminate the network as a possible bottleneck.

I'm not asking for consulting - just more of an understanding on what Cisco gear can do. Which you have done beautifully. Thanks so much!!


Amit Singh Mon, 02/18/2008 - 09:14
User Badges:
  • Cisco Employee,


The DFC is actually a PFC (Policy Feature Card), the same PFC used on the supervisor engine. It caches all the L2-L3 tables locally, which get downloaded onto the DFC along with the compiled security and QoS ACLs.

You can use 6500s/Sup720-10G with Virtual Switching capability for active-active load-sharing of the traffic. Have your 67XX line cards equipped with DFCs, which can be bought later if you don't want to go with them right from day 1. If you want to hook up the servers directly to these chassis, that should be fine for LACP as well.

If you want to use different switches for server connectivity, you can think of using 4948-10GE data center switches, which are line rate with a 96 Gbps switching fabric. You can uplink these switches with 10-GE to the core switches.


-amit singh

Joseph W. Doherty Mon, 02/18/2008 - 08:33

With the 20 Gbps dual channel fabric cards, you have a total of 80 Gbps (duplex), but yes 48 gig ports could congest against the fabric channels.

In practice, it's very unlikely you're going to generate that much sustained traffic, but if it's a concern, only use 40 ports. (I believe the channels are divided into banks of 24, so you could use 20 ports of the first 24 and 20 ports of the second 24.)

If you look at the specs of the 8 port 10 gig card, it's oversubscribed 2:1. There's actually a new command to deactivate every other port to conform with the fabric bandwidth.

If you really want to be able to push 5 such 48 gig port cards at nearly wire rate, especially depending on your packet sizes, you likely need DFCs on the cards. This would be such as the 6748 with DFC.

More info on the 6500 10/100/1000 cards found here:

lamav Mon, 02/18/2008 - 13:16


You're quite welcome...good luck!

