Questions about backplane bandwidth

weiler
Level 1

Greetings all-

As a networking neophyte, I'm having a bit of trouble understanding some (probably) basic concepts on switching backplanes... I was hoping someone could enlighten me on a couple topics...

1: What's the difference between "forwarding rate" and "backplane bandwidth"? Are they connected?

2: What exactly do "fabric enabled" line cards get you in a 6500 series switch? I noticed that the fabric enabled line cards are twice the price of non-fabric enabled line cards... Does this mean that the line card shares the aggregate backplane bandwidth available on the SUP, thus enabling, say, a 48 port card (with GigE ports) to forward packets at "line speed" (i.e. all 48 ports spewing a gigabit of traffic simultaneously?

3: I notice that other switches out there say they have a higher forwarding rate than similar Cisco switches. For example, the HP Procurve 2810-48G claims to forward packets at 71.4 Mpps at 64 bytes while the 3750 with 48 10/100/1000 ports only forwards at 38.7 Mpps. The 3750 is like $15,000 or something, the HP is like $3,000. I know the 3750 has a few more features, but how can HP claim to forward faster than such an expensive Cisco equivalent (while sort-of equivalent)?

Thanks for any insight!

-erich


16 Replies

lamav
Level 8

Hi, Erich:

To answer your questions....

1. The backplane bandwidth refers to the bus bandwidth/speed available for communication between the line cards and the SUP module in a chassis-based switch, like the 6500. The classic Cisco backplane speed is 32Gbps.

The forwarding rate refers to the actual rate at which packets are forwarded by the line cards. Typically, that rate with, say, the SUP 720 and a classic line card is 15 Mpps system-wide. If, however, a DFC daughter card (Distributed Forwarding Card) is deployed on a dCEF720 line card, the forwarding rate can be significantly increased, up to about 400 Mpps system-wide. The dCEF256 line cards connect to a 256 Gbps switch fabric and yield a forwarding rate of up to 210 Mpps. In that case, an SFM (Switch Fabric Module) must exist in the chassis; on the SUP 720, the fabric is built on board.
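
To relate the two numbers: a forwarding rate in Mpps can be turned into a worst-case bandwidth figure by assuming minimum-sized (64 byte) frames, each of which also occupies 20 bytes of preamble and inter-frame gap on the wire. A rough back-of-the-envelope sketch (my own arithmetic, not a Cisco figure):

```python
# Convert a forwarding rate (Mpps) into the bandwidth it can actually carry.
# Assumes standard Ethernet: 8 bytes preamble + 12 bytes inter-frame gap per frame.
def mpps_to_gbps(mpps, frame_bytes=64):
    wire_bits_per_frame = (frame_bytes + 20) * 8
    return mpps * 1e6 * wire_bits_per_frame / 1e9

print(mpps_to_gbps(15))        # ~10.1 Gbps of minimum-sized packets
print(mpps_to_gbps(15, 1518))  # ~184.6 Gbps of full-sized packets
```

So a 15 Mpps forwarding rate only becomes the bottleneck when the traffic mix is dominated by small packets.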

2. 'Fabric enabled' refers to a bus architecture augmentation in the form of a crossbar matrix. Cards that are fabric enabled have an extra connector on the back that plugs into the fabric. This allows a 6500 with, say, a SUP 720 and dCEF720 line cards to have a switch fabric capacity of 720 Gbps, as opposed to the classic 32 Gbps backplane speed when using classic cards. If dCEF256 line cards are used with the SFM module, the switch fabric speed will be 256 Gbps. The Cisco fabric enabled line cards are the 6700 series.

On a 6509, all slots are fabric enabled; on a 6513, only slots 9-13 are.

3. I don't know anything about that HP switch. Sorry.

Hope my ramblings helped :-)

Joseph W. Doherty
Hall of Fame

1. "forwarding rate" is measured in packets (or frames?) per second, "backplane bandwidth" is measured in bits per second. Both are important concerning performance.

For backplane, assume you have just two 100 Mbps full duplex ports. To allow full wire rate to transit between the two ports, you would need 200 Mbps of bandwidth. Anything less and congestion could form.

Think what would happen if you had two network devices with 100 Mbps ports on the outside but only 10 Mbps ports on the inside, between the two devices. I'm sure you can see you're going to be limited by the 10 Mbps on the inside. I.e., you might send 100 Mbps toward the first device, but only 10 Mbps can come out the far side.

Now think of combining the two devices into one device; that 10 Mbps link becomes the backplane bandwidth.

Again to allow 100 Mbps through the device, we need at least 100 Mbps times 2 (remember full duplex) to support it.

As to packet forwarding, packets (frames really) delimit groups of bits on the wire. As each arrives, it takes some time to analyze what to do with the packet and how to direct it. Usually the time spent is the same regardless of the size of the packet.

For Ethernet, 64 byte packets require about 149 Kpps to support 100 Mbps. 1518 byte packets require only about 8 Kpps to support 100 Mbps. If the packet forwarding rate is slower than required, the effective bandwidth is reduced. E.g. 75 Kpps would only allow about 50 Mbps for 64 byte packets, but would allow the full 100 Mbps for packets of roughly 150 bytes or larger. (NB: for full duplex, you need to double the pps rate if you want to support full bandwidth in both directions.)
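
A minimal sketch of that arithmetic, assuming the standard 8 byte preamble and 12 byte inter-frame gap per Ethernet frame (the exact figures come up again further down the thread):

```python
# Packets per second needed to fill a link at a given frame size.
def required_pps(link_bps, frame_bytes):
    wire_bits_per_frame = (frame_bytes + 8 + 12) * 8  # frame + preamble + IFG
    return link_bps / wire_bits_per_frame

print(required_pps(100e6, 64))    # ~148,809 pps (~149 Kpps)
print(required_pps(100e6, 1518))  # ~8,127 pps  (~8 Kpps)
```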

2) "fabric enabled" cards might provide more bandwidth to other cards within the chassis, especially aggregate, and may also offer distributed processing.

The 6500 chassis offers card slot connections for a 32 Gbps shared bus, or one or possibly two fabric channels to each card slot where the fabric channels are either 8 Gbps or 20 Gbps. "Fabric enabled" cards have one or two fabric channels connections, either the 8 Gbps or 20 Gbps type.

The fabric is supplied on either its own card (the older 8 Gbps channels - 256 Gbps total) or on the sup720 (the newer 20 Gbps channels - 720 Gbps total).

You ask about a 48 port card with gig ports. Regardless of the number of cards in the chassis, using the bus, you're limited to 32 Gbps for the whole chassis.

If you have a 48 port card with dual 20 Gbps channels (and supporting chassis), it can push 40 Gbps (duplex) to the fabric which supports similar bandwidth to similar cards. So, 9 cards with dual 20s duplex can push 720 Gbps.
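
Putting rough numbers on that (my own arithmetic; the 48 port gig card on dual 20 Gbps channels is just an example configuration):

```python
# Aggregate fabric math, counting both directions of each full-duplex channel.
slots, channels_per_slot, gbps_per_channel = 9, 2, 20
fabric_total = slots * channels_per_slot * gbps_per_channel * 2
print(fabric_total)  # 720 Gbps for a fully populated 9-slot chassis

# A 48-port gigabit card feeding dual 20 Gbps channels, per direction:
port_capacity = 48 * 1     # Gbps of front-panel ports
fabric_capacity = 2 * 20   # Gbps toward the fabric
print(port_capacity / fabric_capacity)  # 1.2:1 oversubscription
```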

3) I don't know everything the HP you mention provides, but even though a 48 gig port 3750 is not a 64 byte wire rate device, some of the "few more features" might be very significant. The 3750 is a L3 switch, not just L2, i.e. it can route. It provides a high speed stacking feature (compare with its sibling the 3560, which doesn't have that feature). Other features might also be unique to the Cisco in the areas of multicast, QoS, security, etc. (Oh, and the advanced features actually usually work correctly too.)

There's often much more to the true worth of a switch or router than just its raw performance specs.

Hi:

"For Ethernet, 64 byte packets require about 149 Kpps to support 100 Mbps. 1518 byte packets require only about 8 Kpps to support 100 Mbps."

Explain the math?

Thanks

Unfortunately, I can't provide the specifics off the top of my head. However, they are based not on just 64 bytes = 512 bits divided into the Ethernet bandwidth, but actually also take into account Ethernet framing overhead bits and inter-frame timing requirements. (Which is why they are lower than you might expect. It's also why they vary at different frame sizes: the overhead ratios change. Larger is more efficient, just like with IP frame overhead.)

For Ethernet 64 byte packets, the values are:

14,881 pps for 10 Mbps

148,809 pps for 100 Mbps

1.488095 Mpps for 1 Gbps

For some other Ethernet packet sizes, Cisco has a table (I think attached to 6500 performance analysis somewhere) as:

Gig:

Packet Size (Bytes)          64    128   256   512   1024   1518
Theoretical Maximum Kpps   1488    845   453   235    120     81
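
The table falls straight out of the framing overhead mentioned above; a short sketch that reproduces it, assuming 20 bytes of preamble plus inter-frame gap per frame:

```python
# Theoretical maximum Kpps on a 1 Gbps link for various frame sizes.
for size in (64, 128, 256, 512, 1024, 1518):
    kpps = 1e9 / ((size + 20) * 8) / 1e3
    print(size, round(kpps))  # 1488, 845, 453, 235, 120, 81
```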

If you go searching the Internet, you should find the 64 byte Ethernet values all over. The other values are harder to find.

If you're still interested in the actual math, you can also find it on the Internet, but I don't have a link (sorry).

[EDIT]

Found Cisco table, see either table 1 or table 2 "Theoretical Maximum Kpps" in: http://www.cisco.com/en/US/products/hw/modules/ps2643/products_white_paper09186a0080091db8.shtml

Thanks, Joseph.

You're certainly welcome.

PS:

Just did some searching for the "math", found this: http://www.tamos.net/~rhay/wp/overhead/overhead.htm

[Edit]

Also found "math" in: http://archiv.tu-chemnitz.de/pub/2005/0075/data/CSR-04-04.pdf

(NB: Some other interesting calculations, such as effective payload bandwidth, especially with minimum sized packets.)

PS:

Forgot to mention pps rates of 6500 sups. The sup720 supports 15 Mpps for non-fabric and 30 Mpps for fabric cards. Sup32 supports 15 Mpps; fabric not supported.

Yes, pps rates for both are slower than some 3750s. However, fabric cards often (all?) can have, usually optional, DFCs which provide local packet forwarding intelligence. The later fabric cards' DFCs support 48 Mpps (again, per card).

Also, generally many, many more features available within 6500 series than 3750 series. So, again, bandwidth and pps performance, alone, might not be the best selection criteria.
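
Tying this back to question 3: the pps figure a 48 gig port switch needs in order to claim 64 byte wire rate is just arithmetic (nothing vendor-specific here):

```python
# Mpps needed per gigabit port at 64-byte frames, then for 48 ports.
per_gig_port_mpps = 1e9 / ((64 + 20) * 8) / 1e6  # ~1.488 Mpps
print(48 * per_gig_port_mpps)                    # ~71.4 Mpps -- the HP's claimed figure
print(38.7 / per_gig_port_mpps)                  # ~26 ports' worth at 64 bytes for the 3750
```

So the HP number is simply "48 ports at 64 byte wire rate"; it says nothing about features, and real traffic is rarely all minimum-sized packets.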

I think I'm starting to grasp this... So if I have this configuration:

6509 chassis

One SUP720-10G

Five 48 port GigE Fabric-Enabled Line Cards

(Do I need a dCEF-720 line card too?)

And I have cluster nodes plugged into every port on those five line cards, and every node on the cluster decided to spew 1Gb/s onto each port (to another node, for example), then I would get congestion on each card because the card itself can only push 20Gb/s outbound + 20Gb/s inbound (40Gb/s duplex)?

So in essence, there isn't really a way to fully eliminate congestion in the event of a total flooding of data to every port?

I do realize that's a lot of data. :) I'm just trying to imagine a best case scenario in a situation where infiniband/myrinet isn't available.
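
Putting rough numbers on that worst case (my arithmetic, assuming every port's traffic has to leave the card over its dual 20 Gbps fabric channels; traffic switched locally between ports on the same DFC-equipped card may not be constrained the same way):

```python
# Per-direction figures for a hypothetical 48-port gig card on dual 20 Gbps channels.
offered = 48 * 1.0   # Gbps offered by the front-panel ports
fabric = 2 * 20.0    # Gbps available toward the fabric
print(fabric / 48)   # ~0.83 Gbps per port if everything crosses the fabric
```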

Nice explanation Joseph...

deserves rating 5 :-)

Narayan

Erich:

You're right, there is no way to provide a dedicated path on the switch's backplane for each port in a line card, hence the name, shared bus. And yes, if every port started sending traffic at its maximum port speed all at once and continuously, you will experience congestion within the switch's architecture.

But that is expected, of course. It's referred to as oversubscription. And the idea of allowing more than a 1:1 oversubscription rate relies on a technology known as statistical multiplexing.

The underlying assumption is that you will NOT have every port sending traffic at its maximum capability all the time. Instead, there will be times when ports are idle or sending minimum traffic, like say the kind of traffic that maintains a session for an application (keepalive, so to speak).

And when we talk about idle time, we may be talking about the millisecond time frame, not minutes or hours.
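
As a toy illustration of the statistical multiplexing idea (the 75% activity factor below is purely hypothetical, not measured data):

```python
# 48 gig ports, each busy 75% of the time, feeding a 40 Gbps fabric connection.
import random

random.seed(1)
trials, ports, p_active, fabric_gbps = 100_000, 48, 0.75, 40
over = sum(
    1
    for _ in range(trials)
    if sum(random.random() < p_active for _ in range(ports)) > fabric_gbps
)
print(over / trials)  # only a small fraction of sampled instants exceed the fabric capacity
```

Even at a 75% average load, the offered traffic exceeds the 40 Gbps only a few percent of the time, which is the bet oversubscription makes.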

As for forwarding rates, which are measured in packets per second, the speed at which the switch forwards packets depends on the shared bus speed, whether there exists a switch fabric for "express" communication between the different line cards, and the manner in which the switch accesses forwarding information when a port receives a packet for processing and transmission.

When a packet is received on a switch port, the switch must determine how to forward it. The packet's header is forwarded to the SUP module via the shared bus for examination and processing. Based on the destination address, the switch will use the information in the routing table to make a forwarding decision regarding the output port.

Cisco has created a mechanism known as CEF (Cisco Express Forwarding). The information in the CEF table is a derivative of the L3 routing table; it is known as a FIB, or Forwarding Information Base. The purpose of the FIB and its associated next hop table, which is derived from the ARP table, is to streamline the lookup and switching process. Instead of having to do a route table lookup (process switching) for every packet it receives, the switch relies on the CEF FIB, which is sort of an abridged routing table, to keep the explanation simple. This streamlines the forwarding process considerably.
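
As a toy illustration of what a FIB lookup does conceptually (the prefixes and next hops below are made up, and real PFC/DFC hardware does this in TCAM rather than software):

```python
# Longest-prefix-match lookup over a tiny, made-up FIB.
import ipaddress

fib = {
    ipaddress.ip_network("10.0.0.0/8"):   ("next hop 10.1.1.1", "Gi1/1"),
    ipaddress.ip_network("10.20.0.0/16"): ("next hop 10.1.2.1", "Gi1/2"),
    ipaddress.ip_network("0.0.0.0/0"):    ("next hop 10.1.9.1", "Gi1/9"),
}

def lookup(dst):
    addr = ipaddress.ip_address(dst)
    matches = [net for net in fib if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return fib[best]

print(lookup("10.20.5.5"))  # hits the /16, not the /8 or the default route
```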

The reason we say that you can attain higher forwarding rates if you use a DFC (Distributed Forwarding Card, for dCEF) is that the CEF FIB is "downloaded" to each individual line card's ASIC (Application-Specific Integrated Circuit) hardware, instead of remaining centrally located on the SUP module's Policy Feature Card (PFC). So that means the shared bus is relieved of the need to support the transmission of data up and down between the line cards and the SUP module.

If the destination port resides on the same switch, the switch fabric will provide a direct connection for faster forwarding.

Just think of the manner in which people in remote villages had to drive to a centrally located shopping area before shopping centers were built in their local area. Yes, there was a system of highways to support the traffic, but why clutter them with massive amounts of people-flow when they can just shop in their local area? They would get a lot more done that way, wouldn't they?

Victor

Thanks Victor!

So, to get maximum bandwidth and forwarding rate , assuming I put 5 48 port line cards in a 6509, I need:

1 SUP750-10G

1 dCEF-750 line card

5 48 port GigE fabric-enabled line cards

?

Almost..

It's a SUP 720, not 750.

And the 48 port line cards ARE the dCEF720 cards; dCEF720 refers to one of the 4 families of line cards, i.e., to their capability.

Here is a GREAT link for you, Erich. It's all about what we are discussing now. You'll love it.

http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/prod_white_paper0900aecd80673385.html

By the way, you can rate my post if you think I was helpful. :-)

Victor

My bad, I meant 720, not 750... Typo... :)

Thanks for the clarification (and the link!). From previous posts it almost sounded like the DFC was a 'separate' card from the actual 48 port line cards; thanks for clarifying.

Basically we're going to set up a 200 node cluster (each node with one GigE interface) and have about 30-40 storage servers (each with 2 GigE interfaces using LACP to the switch) on the same subnet, serving a parallel file system (all under Linux). The parallel file system spreads file fragments over all 40 storage servers, so the cluster would, in theory, hammer the file system servers for data constantly over GigE.

I'm sure you're correct that the ports will not be totally saturated all the time; I'm just looking for a solution that will virtually eliminate the network as a possible bottleneck.
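
For what it's worth, quick arithmetic with the numbers above (taking the 40 server figure, and assuming LACP spreads flows evenly, which depends on the hash and the flow mix) suggests the storage NICs, not the switch, set the ceiling:

```python
# Aggregate supply vs. demand, per direction.
client_demand = 200 * 1.0   # Gbps if every node reads flat out
storage_supply = 40 * 2.0   # Gbps across all LACP pairs
print(storage_supply)               # 80 Gbps ceiling from the server NICs
print(storage_supply / 200 * 1000)  # ~400 Mbps per client if all read at once
```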

I'm not asking for consulting - just more of an understanding on what Cisco gear can do. Which you have done beautifully. Thanks so much!!

-erich

Erich,

The DFC is essentially a PFC (Policy Feature Card), the same PFC used on the supervisor engine. It caches all the L2/L3 tables locally, which get downloaded to the card along with the compiled security and QoS ACLs.

You can use 6500s with the Sup720-10G and its Virtual Switching capability for active-active load sharing of the traffic. Have your 67xx line cards equipped with DFCs, which can be bought later if you don't want to go with them right from day one. If you want to hook the servers up directly to these chassis, that should be fine for LACP as well.

If you want to use different switches for server connectivity, you can think of using 4948-10GE data center switches, which are line rate with a 96 Gbps switching fabric. You can uplink these switches to the core switches with 10 GE.

HTH,

-amit singh
