Nexus 5000 Switches, Nexus 2000 FEX and multiple links
We have 2 Nexus 5010 switches (SW-A and SW-B) and two Nexus 2148T fabric extenders (FEX-A and FEX-B). Currently FEX-A is connected to SW-A and FEX-B to SW-B. The intent is to have almost every server or storage device dual-homed to a FEX on a different Nexus 5000 switch for path redundancy.
I've been looking through "Cisco Nexus 2000 Series Fabric Extender Software Configuration Guide, Release 4.1", specifically the "Upgrading the Fabric Extender in a vPC Topology" section on pages 21-22 to try to find out what happens if a FEX is dual-homed to multiple switches.
I understand that if one or more FEXs are connected to a single switch, then when the firmware on that switch is upgraded, all attached FEXs reboot along with the switch. What I can't clearly determine is whether, if I dual-home both FEXs to both Nexus 5010 switches, it is possible to upgrade/reboot one switch at a time without rebooting all the FEXs at once. If dual-homing the FEXs means that all of them reboot at the same time, that rather defeats the point of having multiple network paths to the attached servers.
Re: Nexus 5000 Switches, Nexus 2000 FEX and multiple links
There are two important considerations to be aware of when uplinking or dual-homing a single N2K FEX to two separate N5Ks for resiliency. First, each N5K supports a maximum of 12 FEXs, so connectivity depends on your Top-of-Rack design and the actual port density you wish to obtain. For example, if each of the 12 N2Ks is connected to two separate N5Ks, you are limited to only 12 * 48 = 576 host ports in total, because each N5K is associated with the same 12 N2Ks even though the pair of N5Ks is in effect acting as one switch.
Resiliency also depends on the functionality of your hosts: whether they are configured for active/active or active/standby will ultimately determine your network topology. There are essentially two deployment scenarios, one for each end-host mode:
The image above depicts the deployment scenario where active-active is required on the end-host. There are two benefits to this design. First, each N5K can support 12 N2Ks of its own (576 ports per N5K), because the two N5Ks are completely independent apart from the vPC peering. Second, the end-host is configured within a vPC spanning both N5Ks, which is what achieves active-active on the host. Although each N2K is not redundantly uplinked to the other N5K, each N2K can be connected with up to 4 x 10 Gb interfaces in a single port-channel, and even if that whole port-channel and all connectivity to one N2K were lost, the end-host would still be connected and forwarding via the NIC attached to the other N2K, which connects to the other N5K peer switch.
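As a rough illustration (not taken from the original post), the straight-through/active-active design might be configured along these lines on SW-A, with the mirror image (e.g. FEX 101, keepalive addresses swapped) on SW-B. All interface numbers, VLANs, FEX numbers and IP addresses here are assumptions:

```
! SW-A -- sketch only; interface numbers, VLAN and IPs are assumed
feature vpc
feature lacp

vpc domain 1
  peer-keepalive destination 192.168.1.2 source 192.168.1.1

! vPC peer-link between the two N5Ks
interface port-channel1
  switchport mode trunk
  vpc peer-link

! Fabric uplinks to FEX-A (straight-through, up to 4 x 10Gb)
interface port-channel100
  switchport mode fex-fabric
  fex associate 100

interface ethernet 1/1-4
  switchport mode fex-fabric
  channel-group 100

! Host-facing FEX port: member of host vPC 200
! (SW-B has the matching member on its own FEX, e.g. Eth101/1/1)
interface ethernet 100/1/1
  switchport access vlan 10
  channel-group 200

interface port-channel200
  switchport access vlan 10
  vpc 200
```

The key point of the sketch is that each FEX is single-homed to its own N5K, and only the host-level port-channel (vPC 200) spans the pair.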
The important note here is that active-active on the host is not supported where each N2K is itself dual-homed to both N5K peer switches, as depicted below:
Here the end-host is configured for active/standby, and each N2K can be dual-homed to both N5Ks in this scenario. You therefore need to consider how you intend to deploy your end-hosts together with your Top-of-Rack design. Active/standby gives you resiliency if you lose an uplink to one N5K peer switch, though a single lost uplink would not be a problem anyway if the uplinks were configured within a port-channel. If an N2K lost all of its uplinks, that N2K would go down and the host's standby NIC should then begin forwarding on link failure of the active. The drawbacks are that you are limited to connecting only 12 N2Ks in total across both N5Ks, and that you are required to configure the active and standby interfaces on both N5Ks. Let me explain: say the active interface is Eth100/1/1. That interface actually exists on both N5Ks, so if you change any port characteristics you have to change them on both N5Ks; if the two configurations differ, the interface remains down.
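To make the "same interface on both N5Ks" point concrete, a dual-homed FEX in the active/standby design might be associated identically on both peers roughly as follows. Again, every number here is an assumption, and the host-port configuration must match on both switches or the port stays down:

```
! Applied identically on BOTH SW-A and SW-B -- sketch only
interface port-channel100
  switchport mode fex-fabric
  fex associate 100

interface ethernet 1/1-2
  switchport mode fex-fabric
  channel-group 100

! Eth100/1/1 exists on both N5Ks; its settings must be identical on both
interface ethernet 100/1/1
  switchport access vlan 10
```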
In answer to your actual question: if your N2Ks are dual-homed as above, then when NX-OS is upgraded on one N5K, each connected N2K would indeed be upgraded (and rebooted) as a result. This is particularly difficult to manage when the uplinks form a vPC, where both uplinks are one logical channel. Without vPC, STP is required; in that situation whichever N5K discovers the N2K first assumes control and brings its interfaces up, while the other N5K cannot manage those interfaces until the first N5K's uplink is lost. So it is possible to isolate the N2K, but it is not advisable, and since vPC became available in 4.1(3), aggregating the links and avoiding STP is the recommended approach.
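Before and after upgrading either N5K, you can check which switch is managing each FEX and the state of the fabric links with standard NX-OS show commands (the FEX number 100 is an assumption; actual output will vary with your setup):

```
show fex                    ! FEX number, description and state on this N5K
show fex 100 detail         ! fabric interface state and pinning for FEX 100
show interface fex-fabric   ! fabric ports and the FEX uplinks behind them
show vpc                    ! peer-link and vPC status across the pair
```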
In order to minimise downtime through an upgrade, active-active will enable you to upgrade a single N5K and affect only the N2Ks that are locally connected to it. Connectivity will still be achieved through the remaining active interface on the host via the other N5K.
As you can see there are different possibilities, so you need to choose your design carefully.