We have recently deployed Nexus 5K switches along with N2K Fabric Extenders. The problem we are having is that the N2K does not support 10/100 and does not allow BPDUs on its ports, meaning we cannot even connect another switch to make our 10/100 NICs usable. Is there any solution other than connecting a 2950/3560/3750 to the core 6500 switch and attaching the 10/100 NICs there?
Uplink: because the 2K is a fabric extender, it is controlled and configured from the N5K. That simple. There is also a 40 GE uplink to the 5K.
Downlink: there is no spanning tree.
The 2K is not a switch, it is a NIC extender, hence you can't connect switches on the downlinks.
That is true, the 2Ks do not support 10/100. Do you have the option of adding a gigabit card to the servers that only have 10/100? If not, you will need to bring the 10/100 connections back to your cores.
Yes you can; however, the N2K ports are hard-coded as STP edge ports, which means you should not send any BPDUs from the downstream switch.
The easiest way is to run FlexLink on the downstream switch and configure the attached port as the active interface. STP should remain configured on the downstream switch, with BPDU guard enabled on all ports other than the uplink port to the N2K.
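As a rough sketch of that FlexLink setup on a Catalyst 3750 (interface numbers are placeholders for illustration): configuring `switchport backup interface` on the active port disables STP on both uplinks of the pair, so no BPDUs are sent toward the N2K, while BPDU guard still protects the remaining access ports.

```
! FlexLink pair toward the N2K -- STP is disabled on these two ports
interface GigabitEthernet1/0/1
 description Active uplink to N2K
 switchport mode trunk
 switchport backup interface GigabitEthernet1/0/2

interface GigabitEthernet1/0/2
 description Backup uplink to N2K
 switchport mode trunk

! BPDU guard on all remaining (server/iLO) access ports
interface range GigabitEthernet1/0/3 - 48
 spanning-tree portfast
 spanning-tree bpduguard enable
```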
Please note that this is not an ideal design, and the downstream switch should be used primarily for low-rate applications such as iLO.
It would be better if you can connect your Catalyst switch directly to the N5K so that STP remains enabled.
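If the Catalyst is connected straight to the N5K instead, the N5K-side port can be left as a normal spanning-tree port so that BPDUs are processed (a sketch; the interface number is a placeholder):

```
! N5K side: treat the link to the Catalyst as a normal STP port,
! not an edge port, so BPDUs from the Catalyst are processed
interface Ethernet1/10
 description Link to Catalyst 3750
 switchport mode trunk
 spanning-tree port type normal
```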
There are certainly restrictions, but it is possible to configure.
Hi Hatim, thanks for the update. Yes we need to connect only the management/iLO interfaces.
Can you please explain this a bit more?
The easiest way is to run FlexLink on the downstream switch and
configure the attached port as the active interface.
For more details regarding FlexLink, please refer to the FlexLink configuration documentation (for 3750 switches).
Starting with 12.2(44)SE, it is possible to specify which VLANs should be forwarded on the backup interface.
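For example (a sketch with placeholder interface and VLAN numbers), the preferred-VLAN form of the FlexLink command looks like:

```
interface GigabitEthernet1/0/1
 switchport mode trunk
 switchport backup interface GigabitEthernet1/0/2 prefer vlan 60,100-120
```

With this, VLANs 60 and 100-120 forward on Gi1/0/2 while the remaining VLANs stay on Gi1/0/1, giving per-VLAN load sharing across the FlexLink pair.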
I'm a little confused here. Do you gain anything by plugging a Catalyst switch into a Nexus 2K? Why wouldn't you just run the Catalyst back to your cores? I thought the purpose of the Nexus switches was to be used as top-of-rack, and that all the Nexus switches, 2Ks and 5Ks, would be one virtual switch with one management interface. Plugging a 10/100 switch into the Nexus does not buy you anything and defeats the purpose of the Nexus design, unless I have been misinformed.
The intention of the fabric extender is to fan out connectivity to your servers and connect them through a Nexus 5000, while reducing management overhead. It sounds like the original question is about handling cases where 10/100 speeds are mixed throughout the server block.
The intention was never to daisy-chain another switch off of the fabric extender, although you could possibly configure this, since BPDU guard can be turned off in a future release to handle certain server chassis designs. This would not be considered a recommended design - you are correct.
I don't think a 10 Mb/s fabric extender is in the works, but later this year we will have a 100/1000 fabric extender, the N2248T. I think that is the best solution for older 100 Mb/s servers in the server block.