Hi, I was almost sure I had seen a discussion on the specific difference in buffering between these two cards, but I am unable to find it, so I would appreciate some help.
1. Per the documentation, the 6748 has a smaller per-port Tx buffer (1.2 or 1.3 Mb, depending on the document) than the 6148A (5.2 Mb). I would like to know how this affects the module, and why the 6748 is still considered better in a high-bandwidth environment (assuming we take out the DFC). The 6148A is a classic line card and as such uses the classic bus, whereas the 6748 uses a 2 x 20 Gbps connection to the switch fabric. But doesn't per-port buffering become critical when the traffic rate is high?
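To put the quoted buffer sizes in perspective, a small back-of-the-envelope sketch may help. This is my own illustration, not anything from a datasheet: it just converts a per-port Tx buffer (in megabits, using the figures quoted above) into how long the buffer can absorb traffic arriving faster than the port can drain it.

```python
# Rough illustration only. Buffer figures (1.2 Mb for the 6748, 5.2 Mb for
# the 6148A per-port Tx) are the ones quoted in this thread, not verified
# against a datasheet.

def burst_absorb_ms(buffer_megabits: float, excess_gbps: float) -> float:
    """How long a Tx buffer can absorb an overload, i.e. traffic arriving
    faster than the 1 Gbps port can drain it, given the excess rate
    (arrival rate minus drain rate) in Gbps."""
    return buffer_megabits / (excess_gbps * 1000) * 1000  # result in ms

# Egress port at 1 Gbps receiving 2 Gbps aggregate -> 1 Gbps of excess:
print(round(burst_absorb_ms(1.2, 1.0), 2))  # 6748: ~1.2 ms before drops start
print(round(burst_absorb_ms(5.2, 1.0), 2))  # 6148A: ~5.2 ms before drops start
```

So the deeper buffer only buys a few extra milliseconds of sustained overload; it does not change the steady-state throughput of the card, which is what the bus-versus-fabric difference governs.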
2. I wanted to confirm something regarding drops and buffers. Since these buffers are Tx buffers, wouldn't they only be critical when the port is an egress port (i.e. traffic leaving the switch)? In most cases the Rx buffer is only used to hold frames until the card gets access to the fabric, and since the 6748 has a dedicated fabric connection, that shouldn't be a problem. Am I correct in my assumption?
3. When would the Tx buffers become critical, either for the 6748 or the 6148?
Thanks for replying. Could you please elaborate a bit on your reply to point 1? I was referring to this document: "http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps708/prod_white_paper09186a0080131086.html". Per my interpretation, the second paragraph of the section "Overview of Buffers, Queues, and Thresholds" says:
"In the Catalyst 6500 architecture, access into the switch fabric itself is almost never the bottleneck. Rather, on the transmit side, one or several ports are the likely destination for a majority of the packets entering the switch. As such, the receive-side port buffers on the Ethernet modules are relatively small compared to the transmit-side port buffers."
You need to read between the lines. The 6148 does not have a connection to the switch fabric; it connects to the Supervisor via the bus (a 32 Gbps shared connection). The 67xx modules have access to the switch fabric.
With regards to the 6748, would it help to make use of the buffers on the Janus ASIC? Is that even possible? Per my understanding, this could help with bursty traffic, but I am very hesitant, as it might affect all the ports on the Rohini ASIC, which I believe is the equivalent of the Coil ASIC.
Buffers add latency to the data flow. You don't want latency in a switched network, so large buffers in a line card can be counterproductive.
While the large buffers on the classic line card can be a marketing ploy for competitive reasons, classic line cards are usually targeted at workstation connections. You don't want large buffers and latency on connections to servers and inter-switch links; you want a packet to see the same latency and speed entering and exiting the switch hardware. To mitigate the lack of buffers, it is often recommended to configure flow control.
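The latency cost mentioned here is just the flip side of buffer depth. A hedged sketch of my own (buffer figures from this thread, a 1 Gbps port assumed): the worst-case delay a packet can see is the time needed to drain a full buffer ahead of it.

```python
# Illustrative only: deeper buffers trade drops for queuing delay.
# Assumes a 1 Gbps egress port; buffer sizes in megabits are the
# per-port Tx figures quoted in this thread plus one smaller example.
for buf_mb in (0.5, 1.2, 5.2):
    delay_ms = buf_mb / 1000 * 1000  # megabits / (Gbps) -> milliseconds
    print(f"{buf_mb} Mb buffer -> up to {delay_ms:.1f} ms added latency when full")
```

A few milliseconds is negligible for a file transfer but significant for latency-sensitive traffic between servers, which is the point being made about inter-switch and server links.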
Please remember to rate helpful posts, thanks.
Thanks for replying. I had a few follow-up questions regarding the buffers.
1. Wouldn't the higher per-port buffer on the 6148 make it better for high-throughput connections, since it can hold more frames in memory and hence suffer fewer drops? The connection to the fabric is dedicated on the 6748, so it can switch traffic out faster, but if the egress port is receiving more traffic than it can handle, it still has to store the frames in the buffer. Wouldn't that be even more critical in a situation where the egress port is an uplink, so that all ingress ports are sending traffic to the same egress port?
2. With regards to drops: if the Tx buffer is full, would that be indicated by a higher number of output drops on the counters?
3. If the output buffers on the Coil ASIC become full, and the buffers then begin to fill up on the Pinnacle ASIC, causing head-of-line blocking, would this be reflected in the input drops on the counters?
4. If a system has both a 6748 without DFC and 6148 line cards, how does traffic get from a 6748 input port to a 6148 output port, for example, given that the 6748 does not send traffic over the shared bus and the 6148 only connects to the shared bus?
5. Would traffic between fabric-only line cards be affected by the presence of a classic line card, or would it only affect traffic that goes between a fabric-only and a classic line card? I always thought that traffic between fabric-only line cards would not be affected even if a classic line card was present. Please confirm.
Thx for your help.
Okay, I'll answer the ones I can, but I don't know the answer to all the questions. Hopefully someone else can add to this -
1) Not really, no. A 6148A cannot get anywhere near the throughput performance of a 6748, for the reasons I covered. So let's imagine you have 40 servers you want to attach to a 48-port module. With a 6748, all 40 servers can in theory receive their full gigabit allowance from the module. With a 6148A they would get nowhere near that.
Now, about the egress port being an uplink that aggregates all the ingress ports' throughput: yes, the 6748 would be able to overload the buffer more quickly, but that doesn't mean the 6148A is better. And if you are aggregating many ports into one or two, then you would assume the traffic patterns had been taken into account in the design and that this oversubscription was acceptable.
Again, it is important to note that you are talking about congestion situations. These don't happen all the time, and if the network has been designed properly in terms of server location and layout, you can mitigate some of them.
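The 40-server example above can be sketched numerically. This is a simplification of my own (it models only the card's connection to the switch as the bottleneck and ignores any internal port-ASIC oversubscription the real cards have):

```python
# Hedged sketch: why the 6748's fabric connection matters more than raw
# buffer depth for aggregate throughput. Port counts and link rates are
# taken from this thread; the real cards' internals are more complicated.

def per_port_share_gbps(active_ports: int, card_uplink_gbps: float) -> float:
    """Fair share per active 1 Gbps port when the card's connection to
    the switch is the bottleneck (capped at line rate)."""
    return min(1.0, card_uplink_gbps / active_ports)

# 6748: dual 20 Gbps fabric channels = 40 Gbps serving the 48 ports.
print(per_port_share_gbps(40, 40.0))  # 40 active servers: full gigabit each
# 6148A: at best the whole 32 Gbps classic bus, shared with every other
# classic card and the supervisor, so this is an optimistic upper bound.
print(per_port_share_gbps(40, 32.0))  # under line rate even with the bus to itself
```

In practice the 6148A's share of the bus is far lower than this upper bound, since every classic card contends for the same 32 Gbps, which is the point of the answer above.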
2) Should be yes.
3) Don't know.
4) Because the supervisor has connections to both, and hence can transfer packets between the two.
5) Yes it would. Have a look at this doc and search for "truncated".
Basically, if you are only doing centralised forwarding and all your line cards are fabric-enabled, then the switch can achieve up to 30 Mpps using compact mode switching.
If you add just one classic line card, then the switch has to fall back to truncated mode. In truncated mode with centralised forwarding, the switch can only achieve 15 Mpps.
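To see what halving the forwarding rate means in bandwidth terms, here is a small conversion of my own, using the 30 Mpps and 15 Mpps figures quoted above (the packet sizes are just illustrative):

```python
# Hedged arithmetic: converting a packets-per-second forwarding ceiling
# into Gbps for a given packet size. Mpps figures are those quoted in
# this thread for compact vs. truncated centralised forwarding.

def gbps_at(mpps: float, packet_bytes: int) -> float:
    """Throughput in Gbps when forwarding `mpps` million packets/sec
    of `packet_bytes`-byte packets."""
    return mpps * 1e6 * packet_bytes * 8 / 1e9

for mpps in (30, 15):
    print(f"{mpps} Mpps at   64-byte packets ~= {gbps_at(mpps, 64):.1f} Gbps")
    print(f"{mpps} Mpps at 1500-byte packets ~= {gbps_at(mpps, 1500):.0f} Gbps")
```

The pps ceiling bites hardest with small packets, so the drop to truncated mode hurts most under small-packet loads, while large-packet traffic is more likely to hit a bandwidth limit first.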