I looked up the 10 Gigabit datasheet for the 6500s, but it doesn't have anything in terms of oversubscription. It only states that the port buffer for the 6704 is 16 MB while the 6708's is 200 MB.
1) What do port buffers mean?
2) Do these modules have groups of ports connected to a single ASIC, similar to the 6548 modules, which connect ports 1-8 to one ASIC, and so on?
3) Or is it as simple as this: since both modules have 40 Gbps connections to the backplane, the 6704 provides 10 Gbps to each of its four ports (hence no oversubscription), while the 6708's eight ports have to share 40 Gbps, hence 2:1 oversubscription?
My final application for this is to link up my distribution switches to the core.
1) The port buffers are used by the module to store frames while switching decisions are being made, or to queue frames for transmission when packets arrive faster than the physical port can transmit them.
2) Not as far as I know.
3) Yes, pretty much. On the 6708 you have a 2:1 oversubscription, as you have 8 ports each capable of running at 10 Gbps but only a 40 Gbps connection to the switch fabric.
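The arithmetic in point 3 can be checked with a quick sketch. The 40 Gbps per-slot fabric connection and 10 Gbps port speed are taken from this thread's figures:

```python
# Oversubscription = total front-panel bandwidth / fabric bandwidth.
# Figures (10 Gbps per port, 40 Gbps per-slot fabric connection)
# come from this thread, not a datasheet lookup.

def oversubscription(num_ports, port_gbps=10, fabric_gbps=40):
    """Ratio of total port bandwidth to fabric connection bandwidth."""
    return (num_ports * port_gbps) / fabric_gbps

print(oversubscription(4))  # 6704: 1.0 -> no oversubscription
print(oversubscription(8))  # 6708: 2.0 -> 2:1 oversubscribed
```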
In addition to Jon's fine points, I want to expand on point 3.
The 6704 comes with a CFC by default and can be upgraded to a DFC.
If you are planning to purchase multiple 6704s, the CFC may not be the best option: because the headers are sent to the supervisor, you are using up the 40 Gbps of switch fabric bandwidth.
6708 comes with DFC, thus if you plan to purchase multiple 6708s, any traffic between these modules won't transit the switch fabric at all. It will go from DFC to DFC (line rate).
The 6708 also provides DSCP-based queue mapping while the 6704 only provides CoS-based mappings, which, IMHO, is very useful to have when performing Layer 3 QoS.
If money is no object, I recommend going with the 6708 for future growth.
OK, I'm going to go off on a tangent into a DFC convo.
Module 1: 6704 (CFC)
Module 2: 6708 (DFC)
1) Now, when a packet initiated on module 2 has its final destination on module 1, will the packet pass through the supervisor? In other words, do both the source and destination modules need to be DFC-capable, or only the source?
2) Also, do you define 'line rate' as meaning there's no oversubscription? You seem to be using the term differently. I mean, even with a DFC, there can still be potential oversubscription on a 6708 card. Right?
3) Apart from needing the line card to support DFC, is any additional module needed for DFC apart from the Sup720-3B I'm ordering?
4) What's the command to turn on DFC? Is it just 'ip cef distributed'?
5) Lastly, I see the top contributors over here quite often work for Cisco. Just curious, but do you guys get incentives from Cisco for gaining points, or do you simply do it for the love of technology?
1) Not the entire packet, just the header (~32k).
With DFCs on both modules, the header and the frame are switched locally.
2) No; as Jon pointed out (and I agree), there is a 2:1 oversubscription for fabric-bound traffic.
5) I simply do it for the love of technology and keeping my skills sharp.
For more information on the 6708, please see:
What if I have (2) 6509s with (2) 6708s in slot 5, and these modules are connected via 10 Gb fiber? Now, what does the traffic flow look like with these DFC-enabled cards sitting on two different chassis?
"6708 comes with DFC, thus if you plan to purchase multiple 6708s, any traffic between these modules won't transit the switch fabric at all. It will go from DFC to DFC (line rate)."
I am a bit confused about this and am wondering if I have misunderstood exactly how DFCs work, but do you mean that a packet received on one DFC module with a destination on another DFC module will not traverse the switch fabric?
Because I thought that it was only packets local to the module that did not send anything (header, data, etc.) onto the switch fabric.
Your understanding of DFCs is correct. DFCs make local forwarding decisions. When your outgoing port is on another DFC, the packet goes via the fabric to reach the destination line card.
I think, in this thread, he means the case where the ingress and egress ports are on the same DFC (line card). Only that scenario does not send any packets through the fabric.
So if the source and destination are on different blades, and your traffic is mostly of this pattern, there's no advantage to having DFCs at all. Right?
Not quite. Your forwarding decision is still made locally (on the ingress DFC), and the packet is then forwarded out across the fabric to reach the egress line card. System performance increases drastically because you make the forwarding decision locally, which is far better than depending on the supervisor to make all the forwarding decisions. Overall throughput of the system increases.
Please check the link for a detailed understanding.
It provides a comparison of distributed and centralized forwarding.
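To summarize the rules this thread has converged on, here's a tiny conceptual model (my own sketch, not Cisco software; the card/slot names are made up). With a DFC the lookup is local to the ingress card; the frame only stays off the fabric when both ports are on that same DFC-equipped card:

```python
# Conceptual model of the 6500 forwarding paths discussed above.
# Assumptions from this thread: a DFC does the lookup on the ingress
# line card; a CFC sends the header to the supervisor for the lookup;
# only same-card traffic on a DFC card avoids the switch fabric.

def forwarding_path(ingress_slot, egress_slot, ingress_has_dfc):
    """Return (lookup_location, crosses_fabric) for a unicast frame."""
    lookup = "ingress DFC" if ingress_has_dfc else "supervisor"
    # Local switching only when the DFC card holds both ports.
    crosses_fabric = not (ingress_has_dfc and ingress_slot == egress_slot)
    return lookup, crosses_fabric

# Same DFC card: local lookup, no fabric transit.
print(forwarding_path("slot2", "slot2", True))   # ('ingress DFC', False)
# Different cards, DFC ingress: local lookup, but still via the fabric.
print(forwarding_path("slot2", "slot3", True))   # ('ingress DFC', True)
# CFC ingress: supervisor does the lookup, frame crosses the fabric.
print(forwarding_path("slot2", "slot3", False))  # ('supervisor', True)
```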
However, when a DFC packet gets sent to the switch fabric, the switch fabric is still on the Supervisor Engine? It's just that the forwarding decision was made on the ingress DFC-equipped line card. In that case the 6500 system performance increases, because if the line cards make that decision it relieves the supervisor from having to do it.
However, am I correct in saying that the backplane on the 6500 chassis has all the traces from the line cards terminating on the supervisors? There are no inter-card traces, i.e. slot 2 can't send packets directly to slot 3 (without them traversing the supervisor)? In other words, slot 2 does a lookup on its DFC and sees the packet needs to go to slot 3. The packets get routed to slot 3 via the backplane traces via the supervisor, i.e. the supervisor crossbar is essentially a set of wires interconnecting the cards?
Just found this on Cisco site while i was looking for something else. Seems relevant to the discussion
Q. Is the 8-port 10 Gigabit Ethernet module not oversubscribed if I only use half the ports?
A. Yes, you can use only ports 1, 2, 5, and 6 to provide 40 Gbps local switching. To make it easier for you to configure your network, we have a new software command for you to go into performance mode. The software command
router(config)#[no] hw-module slot x oversubscription
will administratively disable the oversubscribed ports (ports 3, 4, 7, and 8) and put them in "shutdown" state. In this mode, the user cannot do "no shut" on the disabled ports. When the user does a "show interface" on the disabled ports, the output will show "disabled for performance" to distinguish between a normal port shutdown and a shutdown for performance.
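The math behind that Q&A answer checks out: with ports 3, 4, 7, and 8 administratively disabled, the remaining four 10 Gbps ports exactly match the 40 Gbps fabric connection. A small sketch (port numbers taken from the quoted Q&A, everything else my own illustration):

```python
# Performance mode on the 6708 as described in the quoted Cisco Q&A:
# 'hw-module slot x oversubscription' shuts down ports 3, 4, 7, 8,
# leaving 1, 2, 5, 6 to share the 40 Gbps fabric connection.

FABRIC_GBPS = 40
PORT_GBPS = 10
DISABLED_FOR_PERFORMANCE = {3, 4, 7, 8}

def active_ports(performance_mode):
    """Ports left enabled on an 8-port module."""
    ports = set(range(1, 9))
    return ports - DISABLED_FOR_PERFORMANCE if performance_mode else ports

def oversub_ratio(performance_mode):
    return len(active_ports(performance_mode)) * PORT_GBPS / FABRIC_GBPS

print(sorted(active_ports(True)))  # [1, 2, 5, 6]
print(oversub_ratio(True))         # 1.0 -> line rate
print(oversub_ratio(False))        # 2.0 -> 2:1 oversubscribed
```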
No need to apologize, it was in jest :)
You've made excellent points on this thread and you've corrected several mistakes in my postings. Thanks!
You are correct, I misspoke. I was trying to say the Supervisor.
I found this very good link
Edit: Oops, already posted by Arivudainambi
That's a pretty good link, and reading the two examples at the end for CFC and DFC, it seems that only the source line card needs to support DFC (the destination doesn't have to).
1) Read the last example on that page and tell me if you agree.
2) And now I also understand that data travelling between line cards always passes through the fabric; however, the lookup need not involve the supervisor.
3) Can CFC line cards be upgraded at a later time by adding daughter DFC modules or do they have to be completely replaced?
4) I'm a little confused about when to use DFC vs. CFC. After all, any traffic will always have a source and destination off one of the line blades (except when I'm using the supervisor GBIC slots). Hence, are there any particular applications where I would use one over the other? My SE seemed to suggest that I should use it for cluster servers, etc., but I don't understand that clearly.
1) Yes, that's the way I read it as well.
2) Correct. With a DFC, no data goes to the supervisor. With a CFC, it depends on the switching mode of the 6500, i.e. bus/truncated/compact.
3) Yes, except classic line cards are not upgradeable.
4) As already pointed out, using a DFC means the forwarding decision is made on the local card even if the data needs to go across the switch fabric.
If you have servers that require very high throughput, it would make sense to put them on the same module with DFC enabled.
Be aware that when you start mixing CFC and DFC cards plus fabric-enabled/classic cards, performance will start to degrade.
I'm getting the 6708 for my core, and that line blade has no CFC option at all, hence it's automatically DFC.
The ports on this line blade connect to the server farm block switches as well as the distribution switches. Not sure if this is a good application, but I'm sold on the 6708 anyway, so DFC is like an add-on benefit. For servers, I'm going to stick with CFC, but I'll tell the customer that further analysis in the design phase may warrant DFC, and if so, we'll simply upgrade the WS-6748-GE-TX cards with daughter cards.
Please try "show fabric utilization".