Cisco Support Community

New Member

DCef720 Throughput calculations - Idiot Check Request


I am just crunching some numbers for a document I need to write, and thought I would ask you guys to check these figures with me.

Concerning a 6500 with a Sup720, equipped with DCEF720 line cards. Now, I'm fairly familiar with the 6500, but I had never looked this deeply into DCEF performance until I had to put it down on paper.

OK, Cisco gives the performance of a DCEF720 card as 48 Mpps per slot, which gives the following throughput figures:

48 Mpps @ 64-byte packets = (48,000,000 * 512 bits) / 1,000,000,000 = 24.576 Gbps

Let's assume for the moment that, as in compact mode, throughput is independent of packet size (I can't find anything to specify whether it is or isn't in DCEF mode, and I can't see how it can be, given the buffering that must be needed to reconcile the figures below, but perhaps someone can tell me). Then 1500-byte packets give:

48 Mpps @ 1500-byte packets = (48,000,000 * 12,000 bits) / 1,000,000,000 = 576 Gbps

On the face of it, that's not bad at all, is it... I mean... 576 Gbps per slot! Awesome! And superfluous, since the most any line card can present from its interfaces is 80 Gbps (8 x 10 Gig line card, 2:1 oversubscription).
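The arithmetic above is just packets-per-second times on-the-wire bits per packet. A minimal sketch of the calculation (my own helper function, ignoring any framing or fabric overhead, as the figures above do):

```python
def pps_to_gbps(pps, packet_bytes):
    """Throughput in Gbps for a given packets-per-second rate and packet size."""
    return pps * packet_bytes * 8 / 1_000_000_000

print(pps_to_gbps(48_000_000, 64))    # 24.576 Gbps
print(pps_to_gbps(48_000_000, 1500))  # 576.0 Gbps
```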


The CEF720 cards connect to the fabric with a 40 Gbps connection made up of 2 x 20 Gbps paths. Now, I know that utilisation of these two paths will rarely be equal in terms of load, because each path services a particular set of ports on the line card, but let's assume we have the full 40 Gbps to play with, since the reality depends on the flows across the card.

So the output onto the fabric from this card is limited to 40 Gbps, and in reality the performance figures for a DCEF card look more like this:

Maximum throughput at 1500-byte packets over 40 Gbps of fabric path (per slot):

40,000,000,000 bits per second / 12,000 bits per packet = 3.33 Mpps

And at 48 Mpps, the maximum packet size which can sustain that rate (per slot) is:

(40,000,000,000 bits per second / 48,000,000 pps) / 8 = ~104.2 bytes
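The two fabric-limited figures above can be checked the same way, taking 40 Gbps of fabric bandwidth per slot as the ceiling (2 x 20 Gbps channels, per the assumption above):

```python
fabric_bps = 40_000_000_000  # assumed 40 Gbps fabric path per slot

# Maximum pps at 1500-byte packets over the 40 Gbps path
max_pps = fabric_bps / (1500 * 8)
print(max_pps / 1_000_000)   # ~3.33 Mpps

# Largest packet size that sustains 48 Mpps over the same path
max_bytes = fabric_bps / 48_000_000 / 8
print(max_bytes)             # ~104.17 bytes
```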

I know some of you guys really know the 6500 well, so does this look like a reasonable illustration to you (usual caveats accepted, of course)?

And I have a second question. With DCEF256 cards, there are two local shared buses which serve two banks of ports, and traffic going from one bank to the other must be switched over the crossbar fabric; happy with that. Now, the CEF720/DCEF720 card seems to be very different.

How does the CEF720 card without a DFC handle this scenario? The architecture paper shows a CFC with a local switching bus between the port groups. Does this mean that some local switching occurs (and if so, how)? Or is it really talking about the access methodology used to get packet headers onto the DBus (assuming compact mode)?

The DCEF720 has a DFC in place of the CFC. In this case, how does local switching between port groups happen? Is traffic passed between groups via the local DFC, or, like the DCEF256, does it have to go via the crossbar fabric on the Sup720 (i.e. out on one path and back in on the other)?

I know, lots of questions, but interesting stuff nonetheless.



Super Bronze

Re: DCef720 Throughput calculations - Idiot Check Request

I've usually seen the equation that for 64 byte Ethernet packets, you need 1.488 Mpps to drive 1 Gbps. If true, then the DFC at 48 Mpps can drive about 32 Gbps. So for 64 byte packets, the limiter could be the DFC, but for larger packets, the bandwidth connection to the card, up to 40 Gbps with sup720, is the likely limiter.
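For what it's worth, the 1.488 Mpps-per-Gbps figure falls out of the on-the-wire cost of a minimum-size Ethernet frame: a 64-byte frame occupies 84 bytes on the wire once you add the 8-byte preamble and 12-byte inter-frame gap. A quick sketch of that derivation (my own arithmetic, on the assumption that the rule of thumb includes that overhead):

```python
# 64-byte frame + 8-byte preamble + 12-byte inter-frame gap = 84 bytes on the wire
wire_bits = (64 + 8 + 12) * 8          # 672 bits per minimum-size frame
pps_per_gbps = 1_000_000_000 / wire_bits
print(pps_per_gbps / 1_000_000)        # ~1.488 Mpps per Gbps

# So 48 Mpps of forwarding capacity can drive roughly:
print(48_000_000 / pps_per_gbps)       # ~32.26 Gbps of 64-byte traffic
```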

Without DFCs, I recall the central forwarding rate for the sup720 is 15 Mpps for non-fabric connected and 30 Mpps for fabric connected cards; this for the whole chassis.

I also recall the older CEF256 cards have either one or two 8 Gbps fabric connections. With DFC, their bottleneck should be the fabric interface.

I'm unsure about traffic bottlenecks moving between ports on the same card. Suspect much depends upon the architecture of the card and whether it has a DFC.