More questions about 6509 backplane, fabric and throughput

Unanswered Question
Jul 29th, 2009

1.

When talking about the 6509, when we say backplane and switch fabric, are we talking about the same thing, or is there a difference?

The documentation seems to use these interchangeably.

2.

According to this document,

http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/prod_qas09186a0080159963.pdf

Table 2 is showing Classic series modules as having 16Gbps bandwidth "shared".

The Q/A just above that table shows that the 720 provides two 20G connections to each module slot.

Would this be a total throughput potential of 10Gbps full duplex per connection?

3.

Does this also mean that even though the backplane is capable of much higher speeds, if I had all "classic" modules in the slots, all of them would collectively share a single 16Gbps connection to the switch fabric?

So, for example, if I had the Sup720 in slot 6 and the rest of the slots were all a mixture of classic Ethernet modules, the maximum throughput I could achieve on all of them together to the switch fabric is 16Gbps?

4.

according to this document,

http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/product_data_sheet0900aecd8017376e.html

Table 2 shows an individual Classic Interface Module as having 32Gbps per system. Is this table showing full duplex while the above document is not?

5.

Using the below hardware config and the above information:

Switching resources:

Module   Part number      Series       CEF mode
2        WS-X6408A-GBIC   classic      CEF
3        WS-X6348-RJ-45   classic      CEF
4        WS-X6348-RJ-45   classic      CEF
5        WS-X6348-RJ-45   classic      CEF
6        WS-SUP720-BASE   supervisor   CEF
7        WS-X6348-RJ-45   classic      CEF
8        WS-X6548-GE-TX   CEF256       CEF
9        WS-X6316-GE-TX   classic      CEF

Does this show that, with all of these modules, the maximum throughput from the ports to the switch fabric is a total of 24Gbps?

Edison Ortiz Wed, 07/29/2009 - 11:37

1.

From http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/prod_white_paper0900aecd80673385.html

The Cisco Catalyst 6500 incorporates two backplanes. From its initial release in 1999, the Cisco Catalyst 6500 chassis has supported a 32-Gbps shared switching bus, a proven architecture for interconnecting line cards within the chassis. The Cisco Catalyst 6500 chassis also includes a second backplane that allows line cards to connect over a high-speed switching path into a crossbar switching fabric

2.

Yes, all values reflect full duplex, so it is actually 10G each way.

3.

Correct, they will utilize another portion of the backplane, not the fabric.

4.

Classic line cards share the 32Gbps backplane.

5.

Using the listed components, they will all share a single 32Gbps backplane.

HTH,

__

Edison.

wilson_1234_2 Wed, 07/29/2009 - 12:28

Edison, per this document:

http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/prod_qas09186a0080159963.html

Table 2 shows the classic modules collectively using the 16Gbps shared connection, and the CEF256 modules using 8Gbps dedicated per slot.

Since I have one of the CEF256 modules and the rest Classic, wouldn't I combine the 16G shared and the 8G dedicated for a total of 24Gbps, even though they are getting switched on different backplanes?

Either way, I am not using very much of the capability of this switch with this array of modules.

The SUP720 should be able to handle much more correct?

If that is the case, I see the processor average around 20%, but it does spike up to 90-100% sometimes.

Wouldn't this switch be lightly loaded?

Edison Ortiz Wed, 07/29/2009 - 13:04

You are right, I missed the CEF256 from the list, it was a long thread to read :)

Sup720 is able to handle much more, yes.

You need to verify what's causing the spike by viewing the list of processes from the 'show proc cpu' output.

HTH,

__

Edison.

bilousand Thu, 07/30/2009 - 05:10

A platform with mixed classic and fabric-enabled modules will operate in truncated mode, in which classic cards send the entire frame over the shared bus and fabric-enabled modules send truncated frames (32-byte header + 64 bytes of payload needed for the L3 forwarding decision, hence the name truncated). In this scenario the fabric is used only to replicate the payload of the frame, and only to fabric-enabled cards (in your case, only one).
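As a rough illustration of why truncated mode saves shared-bus bandwidth, here is a back-of-the-envelope sketch (the 32-byte header and 64-byte payload figures are taken from the description above; the function name is just for illustration):

```python
# Rough model of how much data a line card places on the shared bus
# per frame in truncated mode, per the description above: classic
# cards send the entire frame, fabric-enabled cards send only a
# 32-byte header plus the 64 bytes of payload needed for the L3
# forwarding decision.
TRUNCATED_BYTES = 32 + 64  # header + payload sample

def bus_bytes(frame_len, fabric_enabled):
    """Bytes placed on the shared bus for one frame of frame_len bytes."""
    if fabric_enabled:
        return min(frame_len, TRUNCATED_BYTES)
    return frame_len

# A 1500-byte frame: a classic card consumes the full 1500 bytes of
# bus capacity, a fabric-enabled card only 96.
print(bus_bytes(1500, fabric_enabled=False))  # 1500
print(bus_bytes(1500, fabric_enabled=True))   # 96
```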

You might want to look into

show fabric status

show fabric utilization

show fabric switching-mode

show platform hardware capacity fabric

In any case, as the 6500 does its forwarding decisions in hardware, the load of packets that transit it (are forwarded) shouldn't have any impact on the CPU, up to the point where the TCAMs are exhausted or traffic is targeted at the CPU (as you haven't provided the info on which CPU it is, I assume it's the RP).

Check TCAMs:

show platform hardware capacity

If the TCAMs are OK, then it's control-plane traffic causing the CPU load spikes, such as routing updates (flapping links, for example), frequent SNMP bulk queries and so on, in which case troubleshooting and solving the problem will be more complicated.

Jon Marshall Wed, 07/29/2009 - 11:41

Richard

1) Pretty much yes, to all intents and purposes, although I tend to use the term switch fabric more often.

2) Firstly, when Cisco say a 6509 supplies 720Gbps throughput, they are doubling up the figures. So, as an example, let's say you have

sup720 + 8 x 6748-GE-TX modules. Each 6748 module has 2 x 20Gbps connection to switch fabric.

So the maths is as follows -

2 x 20 = 40

40 * 8 (6748 modules) = 320

+ 40 for the supervisor itself

320 + 40 = 360.

Then to get full duplex figure simply * 2 -

360 * 2 = 720Gbps.

So a 6509 fully populated with modules that support 2 x 20Gbps connection to switch fabric will give you 720Gbps full duplex.
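Jon's arithmetic above can be written out as a quick sketch (figures as stated in the post):

```python
# The 720 Gbps claim for a 6509: sup720 plus 8 x 6748 line cards,
# each slot with 2 x 20 Gbps connections into the crossbar fabric.
channels_per_slot = 2
gbps_per_channel = 20

per_slot = channels_per_slot * gbps_per_channel  # 40 Gbps
line_cards = 8 * per_slot                        # 320 Gbps
supervisor = per_slot                            # the sup720's own slot
total_half_duplex = line_cards + supervisor      # 360 Gbps
total_full_duplex = total_half_duplex * 2        # doubled for full duplex

print(total_full_duplex)  # 720
```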

The 16Gbps shared bus is more often referred to in its full-duplex mode, i.e. the 32Gbps shared bus.

3) Yes it does. Classic modules do not support a dedicated connection to the switch fabric so they would all share the 32Gbps shared bus.

4) Yes - see 2) for more details.

5) I know the 6548 has an 8Gbps connection to the switch fabric. I'm assuming all the others are classic line cards? Apologies for being a bit lazy by not looking them up.

If so, all the classic line cards will share the 32Gbps bus. The 6548 will get its own dedicated 8Gbps channel, or 8 * 2 = 16 for full duplex.

So yes, without full duplex figures you have 16 + 8 = 24. With full duplex you have 32 + 16 = 48.
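The same budget for the chassis in this thread, written out as a sketch:

```python
# Throughput budget for the chassis described above: all classic
# cards share the 16 Gbps bus, and the single CEF256 card (the 6548)
# gets a dedicated 8 Gbps fabric channel.
shared_bus_gbps = 16
cef256_channel_gbps = 8

half_duplex_total = shared_bus_gbps + cef256_channel_gbps  # 24 Gbps
full_duplex_total = half_duplex_total * 2                  # 48 Gbps

print(half_duplex_total, full_duplex_total)  # 24 48
```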

This is why choosing the right line cards/supervisor is important in terms of throughput and cost.

Edit - just noticed the obvious from your output, i.e. they are all classic except for the 6548, which is CEF256 :-)

Jon

wilson_1234_2 Wed, 07/29/2009 - 12:53

Thanks Jon,

It looks like you answered some of the follow-up questions I had, but:

Either way, I am not using very much of the capability of this switch with this array of modules.

The SUP720 should be able to handle much more correct?

If that is the case, I see the processor average around 20%, but it does spike up to 90-100% sometimes.

Wouldn't this switch be lightly loaded?

Joseph W. Doherty Wed, 07/29/2009 - 18:03

Edison and Jon have, I believe, well addressed your questions, but perhaps I can provide some additional information or further clarification.

#1 Although backplane and switch fabric are used almost interchangeably, especially for a chassis switch, a backplane often was/is a set of connections that interconnect edge cards. Fabric generally denotes some kind of crossbar architecture that allows one fabric edge connection to have dedicated bandwidth to another fabric edge connection.

In figure #2 of Edison's reference, you see the diagram label includes "Backplane", but the diagram shows both "Switch Fabric" and "Shared Bus" slot connections. Both are backplane connections, but one set supports a fabric architecture and the other a shared-bus architecture. (In principle, one set of connections might support either type of architecture, somewhat as the 6500 fabric connections support either 8 or 20 Gbps, but that's not the case for the 6500.)

BTW, the sup720 provides the fabric on the supervisor card, but for the earlier sup2, the fabric was provided on a separate (optional) Switch Fabric Module card (256 Gbps).

#2 I believe one 20 Gbps channel provides 40 Gbps (duplex); it's not 10 Gbps each way, as that would be 20 Gbps (duplex).

#3 and #4 Documentation can be confusing because, I believe, bus architectures don't have duplex. (They work somewhat similarly to shared Ethernet.)

If you consult the section "Cisco Catalyst 6500 Architecture: Shared Bus" in Edison's references, it describes (routine) bus architecture operation and 32 Gbps for the DBus. The 16 Gbps references, I suspect (since I've found they seem much fewer than the 32 Gbps references), might indicate some confusion about whether bandwidth is duplex or not (again, usually not for a bus architecture) and/or something from the prior 6000 architecture.

#5 All your line cards only connect to the shared bus except for the 6548 which also has one 8 Gbps fabric connection. In your described configuration, all cards would communicate across the shared bus although the 6548 could pass data to/from the sup720 ports across the fabric. Except for that, your maximum throughput is limited to the shared 32 Gbps.

BTW, for maximum throughput, you also need to consider forwarding performance, which I believe for your configuration is 15 Mpps. For minimum sized Ethernet packets, this would only support about 10 Gbps. However, assuming PPS rate holds for larger packets, the 15 Mpps could support 32 Gbps at about 256 bytes or larger per packet.
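That estimate can be checked with a small calculation. I am assuming here that the wire cost of each frame includes the 8-byte preamble and 12-byte inter-frame gap, which is how the minimum-frame figure comes out near 10 Gbps:

```python
# Throughput supported by a fixed forwarding rate, assuming 15 Mpps
# (the figure quoted above) and counting each frame's full wire cost
# (frame + 8-byte preamble + 12-byte inter-frame gap).
PPS = 15e6
WIRE_OVERHEAD = 8 + 12  # bytes of preamble + inter-frame gap

def gbps_at(frame_bytes, pps=PPS):
    """Gbps carried when every frame is frame_bytes long."""
    return pps * (frame_bytes + WIRE_OVERHEAD) * 8 / 1e9

print(round(gbps_at(64), 1))   # 10.1 -- minimum-size frames, ~10 Gbps
print(round(gbps_at(256), 1))  # 33.1 -- ~256-byte frames exceed 32 Gbps
```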

[edit]

BTW, is it just me that sees Richard's follow on posts as blank? (NB: I see what looks like start of the text in outline mode.)

wilson_1234_2 Thu, 07/30/2009 - 05:17

As always thanks to all of you.

No problem Edison, my questions are usually long. I appreciate all of you guys, you have been a great help over the years.

Off topic a little, but when reading these documents, I never seem to get a complete understanding until I ask you guys specific questions, then it usually clicks (although a little embarrassing sometimes to have to keep asking until I get it).

But, others can learn as well.

Jon Marshall Thu, 07/30/2009 - 05:34

Richard

"Off topic a little, but when reading these documents, I never seem to get a complete understanding until I ask you guys specific questions, then it usually clicks (although a little embarrassing sometimes to have to keep asking until I get it)."

Can't speak for the others, but in my case it's a combination of reading docs and practical experience rather than just reading the docs. 6500 switches have a myriad of options, and when you have to specify kit lists for new installations you tend to want to make sure that what you are buying will actually do the job, e.g.

I have been at one installation at a data centre in the middle of the night when the entire upgrade was postponed because the wrong fans had been ordered for the 6500s. Very frustrating and a bit embarrassing to be honest :-)

Jon

Jon Marshall Thu, 07/30/2009 - 05:42

"BTW, is it just me that sees Richard's follow on posts as blank? (NB: I see what looks like start of the text in outline mode.)"

Yes, just you :-)

wilson_1234_2 Thu, 07/30/2009 - 06:49

Has anyone done any backplane traffic monitoring using the

"monitor traffic-util backplane"?

Is it OK to use that on the fly without any negative effects?

This switch is nowhere near what the Sup720 is capable of with the modules installed.

I have locked up a core switch by doing debugs before and would rather not do anything like that again (very embarrassing; it had to be rebooted in the middle of the day).
