Welcome to this Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about modern data center network infrastructure architectures and design using Cisco Nexus 7000, 6000, 5000, and 2000 Series Switches. This discussion is intended to cover the different design aspects and considerations and not configurations and troubleshooting. This discussion is hosted by Cisco Designated VIP Marwan Al-Shawi.
Note that data center SAN (Fibre Channel, Fibre Channel over Ethernet), servers, and server virtualization will not be covered during this discussion.
Marwan Al-Shawi is a solutions architect with Gulf Business Machines (GBM), one of Cisco's large international Gold Partners, headquartered in Dubai. He has also worked as a technical consultant with Dimension Data Australia, a Cisco Global Alliance Partner; as a network architect with IBM Australia Global Technology Services; and with other Cisco partners and IT integrators. He holds a master of science degree in internetworking from the University of Technology, Sydney. Marwan is currently one of the few Cisco Certified Design Experts (CCDE 2013::66) and holds other Cisco certifications such as CCNP, CCSP, and CCNP Voice.
Remember to use the rating system to let Marwan know if you have received an adequate response.
Because of the volume expected during this event, our experts might not be able to answer every question. Remember that you can continue the conversation in the Solutions and Architecture community under subcommunity Data Center and Virtualization shortly after the event. This event lasts through May 23, 2014. Visit this forum often to view responses to your questions and the questions of other Cisco Support Community members.
Thanks for hosting the forum. Could you please tell me how to guarantee 100% uptime during the upgrade process in a data center running Nexus 7K, 6K, 5K, and 2K switches? I have access to all the documentation on CCO, but I want to hear your viewpoint from a design perspective. Please be as technical as possible in describing the design considerations, keeping the given SLA in mind.
Thanks for your question
I am assuming you mean a software upgrade. In that case, the first feature you need to consider is In-Service Software Upgrade (ISSU), where applicable. Second, you need to make sure that you have redundant paths and devices end to end, from the access layer to the DC core, so you can maintain 100% uptime by performing the upgrade in phases/stages, per layer and per device, taking into account the traffic load during the downtime of any device in case a reload is required.
In other words, you can approach it in different ways, but for 100% uptime you must have redundant paths and devices from the access ports to the core/DC edge.
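As a rough sketch of the ISSU step (the image filenames below are placeholders for whatever release you are moving to), you would first check that the upgrade can be done in-service before running it:

```
! Verify the upgrade impact first; look for "non-disruptive" in the report
switch# show install all impact kickstart bootflash:n7000-s2-kickstart.6.2.x.bin system bootflash:n7000-s2-dk9.6.2.x.bin

! If the impact report is non-disruptive, perform the ISSU
switch# install all kickstart bootflash:n7000-s2-kickstart.6.2.x.bin system bootflash:n7000-s2-dk9.6.2.x.bin
```

Repeat this per device, one layer at a time, and verify vPC/HSRP state before moving on to the next box.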
Hope this helps.
Thanks Marwan for the information,
I don't think I framed my question concerning EvPC correctly.
The N5K guide states that each N2K needs its own vPC uplink, for example vPC 100 for N2K1 and vPC 101 for N2K2.
See the attachment below named 5K.
However, the diagram named 7K, from the N9K design guide, implies that you can have a common vPC from both N2Ks, say for example vPC 100.
Hence my question: is the 7K diagram invalid for vPC connectivity between the N5Ks and the N2Ks?
I see what you mean here.
In the link that you posted, it is clearly mentioned that two FEXs can share the same vPC, but more than two is not supported:
" With Enhanced vPC, the port channel can be formed among ports from up to two FEXs that are connected to the same pair of Cisco Nexus 5000 Series devices. This topology which is shown in Figure 6-7, does not work and is not supported. " " The CLI rejects the configuration when it detects the port channel members are from more than two FEXs. "
As for the second diagram, it is showing the logical view, where your inbound and outbound traffic logically goes over different paths and different devices.
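As an illustration, an Enhanced vPC host port channel spanning two dual-homed FEXs would be configured on both N5K parents along these lines (FEX numbers 101/102, the port-channel number, and the VLAN are just example values):

```
! On both Nexus 5000 parents: host port channel across two FEXs
interface Ethernet101/1/1
  channel-group 10 mode active
interface Ethernet102/1/1
  channel-group 10 mode active

interface port-channel 10
  switchport access vlan 10
```

Trying to add a member from a third FEX (e.g. an Ethernet103/1/x port) into the same channel-group is exactly what the CLI rejects.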
Hope this helps.
I have a couple of questions concerning EvPC, the F2e line card and licensing.
Question 1: EvPC.
See Figure 9 from the N9K design document: http://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/guide-c07-730115.html, vPC all the way from the N7Ks to the N5Ks to the N2Ks. Is there an error in this diagram? I thought that with EvPC between the N5Ks and the N2Ks, each N2K required its own vPC uplink. However, the diagram shows a common vPC between the two N2Ks.
It doesn't match the topology outlined in this link, see Figure 6-3: http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/mkt_ops_guides/513_n1_1/n5k_enhanced_vpc.html#wp1161967
Question 2: F2e and M line cards in the same VDC
The data sheet for the F2e line card states that, "When deploying the Cisco Nexus 7000 F2e-Series Fiber Module in a VDC together with the Cisco Nexus 7000 M-Series modules, the Cisco Nexus 7000 F2e-Series Fiber Module will run in Layer 2-only mode, delegating all Layer 3 capabilities to the Cisco Nexus 7000 M-Series modules present in the VDC"
- What if the M line card in the VDC fails? Will the F2e revert to Layer 3 capability, or can it still only perform Layer 2?
- When mixing the F2e and M line cards in the same VDC do you require a VDC license?
Question 3: Licensing a N7K
If a N7K was purchased with just F2 line cards and no licenses, does this mean that all this switch can perform is Layer 2 functionality?
Thanks for your participation. Please see the answers below:
Q1: Enhanced vPC provides the ability to multi-home the access FEX/N2K to two different parent N5Ks, and at the same time, from the host/server side, you can multi-home the NICs to two different FEXs/N2Ks. It is called "enhanced" because the earlier version of vPC did not support this type of connectivity.
Q2: For the M and F2e line cards there are different L3 capabilities. Also, if the M line card fails, this does not mean the device can revert L3 functionality back to the F2e in all cases. So even if this design is supported, it is going to be a bit confusing; instead, I would recommend having redundant M line cards to cover a situation where one of the M line cards fails.
- A VDC license is only required when you want to configure multiple or non-default VDCs.
Q3: For routing you need the L3/IGP license, which can be part of the Enterprise Services Package. If you want L3, obtain its license, because even though you can get some L3 functionality by default without a license, it will be very limited for a powerful device like the N7K.
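For example, you can check which feature packages are licensed and in use, and install a license file (the filename below is hypothetical):

```
! Show which feature packages are licensed and currently in use
switch# show license usage

! Install a license file previously copied to bootflash
switch# install license bootflash:n7k_lan_enterprise.lic
```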
Hope this helps.
Currently we are refreshing our data center infrastructure and migrating the aggregation switches from Catalyst 6500 to Nexus 7700; however, we still need to keep some of the 6500 switches to utilize the existing service modules such as load balancers and firewalls. My question is how we can integrate these into our new Nexus aggregation layer (e.g., VRF, VDC) - what do you suggest?
Thank you for your help.
The typical approaches for this type of scenario are, as you mentioned, either using VLANs with VRFs or using multiple VDCs.
My suggestion is to go with VDCs where possible, provided this design is not just temporary for the migration phase, because VDCs provide better isolation between the routing environments and are more robust, since VDCs are separate virtual switches with completely different sets of processes and physical ports.
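A minimal sketch of both options (the VDC name, VRF names, interfaces, and addresses are made up for illustration):

```
! Option 1: dedicated VDC facing the 6500 services chassis
vdc SERVICES
  allocate interface Ethernet3/1-4

switchto vdc SERVICES
! ...then configure the links and SVIs toward the 6500 inside this VDC

! Option 2: VRF "sandwich" within a single VDC
vrf context OUTSIDE
vrf context INSIDE
interface Vlan100
  vrf member OUTSIDE
  ip address 192.0.2.1/24
```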
Hope this helps.
I hope OTV is included in this forum; if not, then no issues...
I have deployed OTV across two datacentres using N7Ks and have allowed a number of VLANs for systems that need to reside in the same VLAN across the DCs. We are now in the process of deploying F5 LBs (GTMs and LTMs), and they will sit in both DCs to load balance and provide resiliency for some of our applications. The question I have is this: these systems will sit in the OTV VLANs, but their default gateways will be the F5 LBs and not the SVIs on the N7K. How does this affect OTV operation, and will it still work?
The traffic flow will be such that the destination IP for these applications will be the VIP on the F5, and then the F5 will forward the traffic to the backend servers in the server pools. The servers will always reply back via the F5. I am just trying to figure out how the MACs may affect the OTV operation.
This is an interesting question. As you know, with extended L2 there is always difficulty with the inbound and outbound traffic flows.
If you use LB source NAT, the servers will reply to the LB where the traffic originated, and you may use GTM to redirect the traffic to the appropriate DC/LB; again, check the vendor documentation to confirm they support that.
In any case, OTV operation will not be affected, because all it does is provide the L2 extension; what you need to take care of is your L3 traffic flow.
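For reference, a basic OTV edge-device configuration looks like the sketch below; the multicast groups, VLAN ranges, and site values are example numbers. Nothing here changes based on whether the gateway for the extended VLANs is an SVI or the F5:

```
feature otv
otv site-vlan 99
otv site-identifier 0x1

interface Overlay1
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  otv extend-vlan 100-110
  no shutdown
```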
Hope this helps.
How does FabricPath handle multiple HSRP gateways between the leaves and the spines? Can there be more than two active at the same time, for example?
As you mentioned, with Cisco FabricPath there may be scenarios with multiple upstream/spine switches that all need to be able to forward and respond to HSRP. Cisco utilizes a concept with FabricPath to handle this, called Anycast HSRP, where all the spines can forward your L3 upstream traffic.
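A rough sketch of an Anycast HSRP bundle on the N7K spines (available from NX-OS 6.2; the bundle ID, switch-id, VLAN range, and priority below are example values - please check the configuration guide for your release):

```
feature-set fabricpath
feature hsrp

hsrp anycast 1 ipv4
  switch-id 1100
  vlan 10-20
  priority 100
  no shutdown
```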
Hope this helps.
Hope you are doing great. I have a question related to Cisco Nexus.
As we all know, Cisco Nexus is the flagship product line for the data center, similar to QFabric from Juniper.
However, in Juniper, the entire QFabric is seen as a single switch (no spanning tree problem). With QFabric, you can connect 128 extension switches that look like a single switch, while with Nexus we have the spanning tree problem, so you cannot utilize the entire bandwidth at the same time; you only have the ability to use half of the available ports (things can be improved by giving high priority to certain VLANs on one switch and other VLANs on the other switch). But, like Juniper, does Cisco have any plan to build a product like QFabric where we can get rid of spanning tree issues forever?
Thanks for your question.
Well, Cisco has over the years introduced several technologies and architectures that meet next-generation DC needs, even before QFabric came to the market.
Cisco utilizes different concepts and architectures to avoid the reliance on STP, e.g. vPC and FabricPath.
Cisco FabricPath is something you can compare to QFabric. The Cisco architecture is different; however, the end result is the same. Not to mention, Cisco FabricPath is more scalable and simpler when it comes to design and implementation.
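To give a flavour of how simple it is, a minimal FabricPath setup on each participating switch looks like this (the VLAN and interface numbers are just examples):

```
! Enable the FabricPath feature set
install feature-set fabricpath
feature-set fabricpath

! Put a VLAN into FabricPath mode
vlan 10
  mode fabricpath

! Put a core-facing link into FabricPath mode (no STP across the fabric)
interface Ethernet1/1
  switchport mode fabricpath
```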
For more details, please refer to the following links.
Hope this helps.