Nexus 5K/2K Architecture with respect to Packet Forwarding

Unanswered Question
Sep 3rd, 2010

Hi, I am trying to get a better understanding of the Nexus 5K/2K with respect to how packet forwarding occurs, and to compare it to the 6Ks. The Nexus 2K is a fabric extender, and it is documented that we need to treat the 2K as a line card. It also seems that when traffic needs to go east to west, i.e. from one port on a 2K to another port on the same 2K, the entire packet (header + data) is forwarded to the Nexus 5K.

Would this operation be similar to how a 6K chassis works with classic line cards in flow-through mode, where the header + data is forwarded to the Supervisor?

If that is the case, won't this affect forwarding performance and latency on the Nexus 5K? For such east-to-west traffic patterns, won't the 6K give better performance from a forwarding and latency perspective if you are using classic/fabric-enabled cards in truncated mode, or fabric-enabled-only cards in compact mode, and your egress port is on the same line card as your ingress port?

And if you are using a 6K with DFC-enabled line cards, won't the 6K give even better performance still? A DFC module performs a local lookup, and if the egress port is on the same line card it won't have to send the data across the crossbar fabric at all, achieving better numbers for latency and throughput.

I guess the Nexus makes up some of this by being a cut-through box, but still, sending the entire header + data up to the N5K and then back down to the same FEX seems like it would add a lot more latency than switching locally.
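To make the cut-through point concrete, here is a back-of-envelope sketch (my own illustration, not from the thread): a store-and-forward hop must absorb the whole frame before transmitting, so its added delay grows with frame size, while a cut-through hop starts forwarding after reading only the header.

```python
# Serialization delay a store-and-forward hop pays per frame,
# computed from frame size and line rate (10 Gigabit Ethernet here).
# A cut-through hop avoids most of this because it forwards as soon
# as the header has been read and the lookup is done.

LINE_RATE_BPS = 10e9  # 10GE

def serialization_delay_us(frame_bytes: float, rate_bps: float = LINE_RATE_BPS) -> float:
    """Time to clock an entire frame onto the wire, in microseconds."""
    return frame_bytes * 8 / rate_bps * 1e6

# Minimum frame, standard MTU, and jumbo frame:
for frame in (64, 1500, 9000):
    print(f"{frame:5d} B frame: {serialization_delay_us(frame):.3f} us per store-and-forward hop")
```

At 10GE a 1500 B frame costs 1.2 us of serialization per store-and-forward hop, which is why cut-through latency staying flat across frame sizes matters.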

In addition, won't distance also come into play, since a FEX can be placed anywhere in the DC? If it happens to be a long way from the N5K, won't propagation delay also be a factor? Minuscule, admittedly, but compared to a line card within the same chassis it still adds up, doesn't it?

I apologize for asking so many questions, but I am trying to better understand the architecture compared to the 6Ks, along with the benefits and disadvantages. I appreciate any feedback, and corrections if the above is incorrect. Thanks for your help.

kitanaka Mon, 09/06/2010 - 21:41

Hello Vadlaney,

The Fabric Extender does not support local switching. It is managed by the N5K, and packets have to be forwarded to the N5K for the forwarding decision. The Cat6500 and "N5K plus FEXs" architectures differ in several ways. They are similar in that forwarding decisions for all packets are made on a centralized system: the Sup in the case of Cat6500 flow-through mode, and the N5K in the case of "N5K plus FEXs". However, the N5K has multiple ASICs that perform the forwarding lookup for each block of ports, so forwarding is actually done at multiple points. In Cat6500 flow-through mode, by contrast, the ingress LC floods the entire frame onto the shared bus, and all LCs store it until they receive the lookup result from the Sup; that is an out-of-date architecture.


I think the advantage of using N5K plus FEXs is that the latency numbers remain the same from any server port on one FEX to any server port on another FEX through the Nexus 5000. With a 10G ingress port it is entirely cut-through, which results in constant latency regardless of frame length. With the Cat6500, latency can differ depending on the switching mode, the LC/port combination, and the frame length. The shortest path in the life of a packet in a Cat6500, the case you mentioned where it does "not have to send the data across the crossbar fabric", is limited to only a few ports. On the WS-X6704-10GE, for example, each pair of front-panel ports is terminated by a different forwarding instance, so a frame ingressing on port-a of a WS-X6704-10GE and egressing on port-c of the same WS-X6704-10GE still needs to be sent across the switch fabric.
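The port-grouping point can be sketched as a tiny lookup (an illustration only; the pairing of consecutive front-panel ports into forwarding instances is my assumption for the example, not a verified port map):

```python
# Illustrative sketch: if front-panel ports are grouped in consecutive
# pairs per forwarding instance, then even two ports on the *same* line
# card may sit on different instances and so still traverse the fabric.

def same_forwarding_instance(port_x: int, port_y: int, ports_per_instance: int = 2) -> bool:
    """True if two 1-based front-panel ports share a forwarding instance,
    assuming consecutive ports are paired (assumption, not a port map)."""
    return (port_x - 1) // ports_per_instance == (port_y - 1) // ports_per_instance

print(same_forwarding_instance(1, 2))  # paired ports: stays local to the instance
print(same_forwarding_instance(1, 3))  # different instances: must cross the fabric
```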

> In addition, won't distance also come into play, since a FEX can be placed anywhere in the DC? If it happens to be a long way from the N5K, won't propagation delay also be a factor? Minuscule, admittedly, but compared to a line card within the same chassis it still adds up, doesn't it?

I think a long fiber cable does not add significant latency; what matters is the latency incurred getting from the NIC onto the cable and vice versa. For the FEX fabric links, that is 0.4 usec on receive and 0.1 usec on transmit.
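Those figures can be put side by side with cable propagation delay (a sketch of my own; the ~5 ns/m figure is the usual rule of thumb for light in fiber, and the FEX numbers are the 0.4/0.1 usec quoted above):

```python
# Compare one-way fiber propagation delay against the FEX fabric
# latency figures quoted above (0.4 us receive + 0.1 us transmit).

FIBER_NS_PER_M = 5.0   # rule-of-thumb propagation speed in glass (~2/3 c)
FEX_RX_US = 0.4        # FEX fabric receive latency (from the reply above)
FEX_TX_US = 0.1        # FEX fabric transmit latency (from the reply above)

def propagation_us(cable_m: float) -> float:
    """One-way propagation delay over fiber, in microseconds."""
    return cable_m * FIBER_NS_PER_M / 1000.0

for meters in (3, 30, 100):
    print(f"{meters:4d} m fiber: {propagation_us(meters):.3f} us propagation "
          f"vs {FEX_RX_US + FEX_TX_US:.1f} us FEX fabric overhead")
```

Even a 100 m run contributes about 0.5 us one way, on the same order as the FEX fabric overhead, which supports the point that cable length is not the dominant term at typical in-row distances.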

Regards,

Kimihito.
