
Ask the Expert: Deploying Cisco FabricPath in the Data Center Network

ciscomoderator
Community Manager

With Anees Mohammed and Viral Bhutta


Welcome to the Cisco Support Community Ask the Expert conversation. This is an opportunity to learn and ask questions about Cisco FabricPath with Cisco technical support experts Anees Mohammed and Viral Bhutta. You can ask questions about FabricPath as a protocol, multidestination trees, virtual port channel+ (vPC+), general design considerations, general best practices, FabricPath migration, and how to troubleshoot FabricPath on Nexus 7000, Nexus 6000, and Nexus 5000 Series switches.

Anees Mohammed is a Network Consulting Engineer for Cisco Advanced Services, where he has been delivering plan, design, and implementation services for enterprise-class data center networks with leading technologies such as virtual port channel (vPC), FabricPath, and Overlay Transport Virtualization (OTV). He has 10 years of experience in enterprise data center networking and has held various roles within Cisco, such as LAN switching content engineer and LAN switching TAC engineer. He holds a Bachelor's degree in Electronics and Communications and has a CCIE in Routing and Switching.

Viral Bhutta is a Customer Support Engineer for Cisco's Data Center Switching Technical Assistance Center (TAC). He began his Cisco career as a TAC engineer for local area network (LAN) switching in 2009. He is now a technical lead for the data center switching team, which supports Nexus 7000, Nexus 6000, Nexus 5000, Nexus 4000, Nexus 3000, and Nexus 2000 (FEX) Series switches. He has technical expertise in virtual port channel (vPC), Overlay Transport Virtualization (OTV), FabricPath, Spanning Tree Protocol, quality of service (QoS), and multicast. He holds a Master's degree in Electrical Engineering from the University of Southern California and has CCNA and CCNP certifications.

Remember to use the rating system to let Anees and Viral know if you have received an adequate response.  

They might not be able to answer each question due to the volume expected during this event. Remember that you can continue the conversation on the Data Center sub-community Unified Computing discussion forum shortly after the event.

This event lasts through May 31st, 2013.

Visit this support forum often to view responses to your questions and the questions of other Cisco Support Community members.

11 Replies

Sarah Staker
Level 1

Hello,

I have a question about MAC learning in a FabricPath network. If the access layer switch learns the MAC address of a server, does it advertise this MAC address to the FabricPath network using IS-IS, just like OTV does?

Thank you for your prompt response.

Sarah

Hi Sarah,

Great question. FabricPath advertises only the switch IDs using IS-IS, not the MAC addresses.

That is one of the reasons FabricPath is highly scalable.

To add to what Viral says, OTV does control-plane learning, whereas in FabricPath MAC learning is via the data plane. In summary, the access layer switch will not advertise the learned MAC address via IS-IS.
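As a rough illustration of the answer above, a minimal NX-OS FabricPath enablement could look like the following sketch (the switch ID, VLAN, and interface numbers are hypothetical examples, not from the discussion). IS-IS carries only the switch-ID topology; MAC addresses are still learned from the data plane:

```
! Illustrative minimal FabricPath setup on NX-OS
install feature-set fabricpath
feature-set fabricpath

fabricpath switch-id 11          ! this switch ID is what IS-IS advertises

vlan 100
  mode fabricpath                ! VLAN carried across the fabric

interface ethernet 1/1
  switchport mode fabricpath     ! core port; runs FabricPath IS-IS
```

With something like this in place, `show fabricpath isis database` would show switch IDs in the control plane, while `show mac address-table` shows MACs learned from forwarded traffic.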

dave.cole
Level 1

I have a couple questions.

1) Does the peering caveat with vPC still exist for vPC+?  I want to have a pair of ASA firewalls speaking EIGRP over multiple 1GbE interfaces to a pair of Nexus 7k's using vPC+.  Of course, the vPC peer-link is over the FabricPath port channel between the Nexus 7k heads.  (n7009, sup2, F2e line cards)  Technically, I want to go so far as to have multiple ASA contexts, potentially each speaking EIGRP, with lots of VLANs, and the vPC/L3 limitation is a challenging one to work around.

2) And boiled down, in a n7k vPC deployment, there are caveats about keeping traffic off the vPC peer-link.  In a FabricPath deployment, the vPC peer-link is over FabricPath.  Are we as concerned about keeping traffic off that link, or do we care anymore because it is foremost a FabricPath link?

1) Does the peering caveat with vPC still exist for vPC+?  I want to have a pair of ASA firewalls speaking EIGRP over multiple 1GbE interfaces to a pair of Nexus 7k's using vPC+.  Of course, the vPC peer-link is over the FabricPath port channel between the Nexus 7k heads.  (n7009, sup2, F2e line cards)  Technically, I want to go so far as to have multiple ASA contexts, potentially each speaking EIGRP, with lots of VLANs, and the vPC/L3 limitation is a challenging one to work around.

Answer:

You still have this limitation if you connect an ASA dual-homed to the N7ks in a vPC+ environment. However, you don't have this limitation if the devices live within the FabricPath cloud and form EIGRP adjacencies with the Nexus 7k. For example, refer to the scenario below:

        Nexus 7k (vPC+) -- FabricPath Links -- Nexus 5k (vPC+) - vPC - ASA

In this topology the ASA is dual-homed to the Nexus 5ks using vPC+. The Nexus 5ks in turn connect to the N7ks using FabricPath. The N7ks are still running vPC+. In this case the ASA can form EIGRP adjacencies with the N7ks with no issues.
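A hedged configuration sketch of the vPC+ side of this topology (domain number, switch IDs, and port channels are hypothetical). The emulated FabricPath switch ID under the vPC domain is what turns a vPC into a vPC+:

```
! Illustrative vPC+ sketch on the Nexus 5k pair
vpc domain 10
  fabricpath switch-id 1000      ! emulated switch ID, identical on both vPC+ peers

interface port-channel 1
  vpc peer-link
  switchport mode fabricpath     ! the vPC+ peer-link is a FabricPath link

interface port-channel 20
  switchport mode trunk
  vpc 20                         ! vPC+ leg toward the dual-homed ASA
```

The same `fabricpath switch-id` must be configured under the vPC domain on both peers so the pair appears as a single emulated switch to the rest of the fabric.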

2) And boiled down, in a n7k vPC deployment, there are caveats about keeping traffic off the vPC peer-link.  In a FabricPath deployment, the vPC peer-link is over FabricPath.  Are we as concerned about keeping traffic off that link, or do we care anymore because it is foremost a FabricPath link?

Anees:

The limitation for routing over vPC is still there in FabricPath, but there is no limitation on routing over FabricPath. That is good news for customers with devices in the access layer that require routing peering with the aggregation layer. I cannot think of any other caveats; if you can point me to some, I can answer specifically. Also, I did not see any requirement to have separate links in addition to the vPC+ peer link.

Sarah Staker
Level 1

Hello,

I need some more help.

My data center is currently using Catalyst 6500 Series switches in a typical three-tier model: Core--Aggregation--Access. We would like to build a new network with FabricPath.

We would like to connect both the new and existing aggregation layer switches using a layer 2 trunk. How would the spanning tree and FabricPath domains interact?

Thanks a lot,

Sarah

You may connect the IOS aggregation layer switches to the Nexus aggregation layer switches using spanning tree. The requirement is that the spanning tree root for the VLANs you are extending MUST be on the Nexus aggregation layer switches that are running FabricPath.

Hence the spanning tree topology would look like this:

Note that the layer 2 flows from 6500-Aggregation-2 have to flow through the FabricPath network with this topology. If this is not desired, you could manipulate the STP cost such that layer 2 flows within the IOS network do not go through the FabricPath network.

The STP cost can be configured to a higher value so that 6500-Aggregation-2 prefers 6500-Aggregation-1 instead of the vPC port channel.
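Both knobs mentioned above could be sketched roughly as follows (VLAN ranges, interface, and the cost value are illustrative assumptions): pin the STP root on the FabricPath aggregation switches, and raise the cost of 6500-Aggregation-2's uplink toward the vPC so it prefers the direct link to 6500-Aggregation-1:

```
! On the Nexus (FabricPath) aggregation switches
spanning-tree vlan 10-20 root primary

! On 6500-Aggregation-2 (IOS), on the uplink toward the Nexus vPC
interface port-channel 10
  spanning-tree vlan 10-20 cost 500
```

Any cost high enough to make the inter-6500 path cheaper works; 500 is just an example value.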

Hi there,

Could you please advise on the possibility, advantages, and disadvantages of using FP as a DCI instead of OTV, assuming the DCI is dark fiber (L2, not an L3 core),

in the case of dual links between DCs?

Thanks

A.  For the Data Centers running Spanning tree (vPC) within the Data Center Network

FabricPath

Pros                                      

1. Does not extend Spanning tree (BPDUs) between the two data centers

2. On the FabricPath DCI switches, you need to create only the VLANs that are required to be extended between the two Data Centers

3. Both links between the Data Centers are utilized, giving efficient load balancing. You can add more dark fibers between the Data Centers, and all the links can be used by the FabricPath load-balancing algorithm.

4. VLAN scalability for FabricPath is higher than OTV as of this writing.

5. Resiliency is better than OTV in some failover scenarios.

Cons

1. The spanning tree root for the extended VLANs needs to be configured on the FabricPath switches in the DCI layer. This may not be a problem, but you need to understand the STP states in your network when you deploy this solution.

2. HSRP localization cannot be implemented as in OTV. However, you can have two different gateways at Data Center 1 and Data Center 2 using two different HSRP groups. If a server is moved dynamically from one Data Center to the other, you need to change the default gateway on the server.

3. If there is unknown unicast flooding in one Data Center, FabricPath also carries the flooding to the second Data Center.

4. No ARP optimization across the Data Centers.
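The two-gateway workaround mentioned in the cons could be sketched as below (VLAN, group numbers, and addresses are hypothetical; `feature hsrp` is assumed enabled). Each Data Center keeps its own HSRP group and virtual IP on the same extended subnet:

```
! Data Center 1 aggregation switch (illustrative)
interface vlan 100
  ip address 10.1.100.2/24
  hsrp 1
    ip 10.1.100.1                ! DC1 default gateway

! Data Center 2 aggregation switch (illustrative)
interface vlan 100
  ip address 10.1.100.5/24
  hsrp 2
    ip 10.1.100.4                ! DC2 default gateway
```

A server migrated from DC1 to DC2 would then have to be repointed from 10.1.100.1 to 10.1.100.4, which is exactly the operational cost the con describes.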

OTV

Pros                                      

1. Does not extend Spanning tree (BPDUs) between the two data centers

2. On the OTV DCI switches, you need to create only the VLANs that are required to be extended between the two Data Centers

3. Spanning Tree Root for the vlans can be at the Data Center Aggregation Layer switches and OTV solution does not influence the STP Root placement.

4. Unknown unicast flooding is not propagated across the Data Centers.

5. ARP optimization between Data Centers is available in OTV. A host sending tons of ARP requests can impact the control plane of the Nexus devices, causing them to drop legitimate ARP requests. With ARP optimization, these ARP floods may not be flooded across OTV to Data Center 2, so the Data Center 2 Nexus control plane may not be impacted.

6. HSRP localization is configurable, so you can keep the same HSRP IP (default gateway) in both Data Centers.

Cons

1. Typically two flows (odd VLANs by OTV-VDC-1 and even VLANs by OTV-VDC-2) carry the entire layer 2 traffic between the two Data Centers. Hence the load balancing across the links is not efficient.

2. VLAN scalability for OTV is lower than FabricPath as of this writing.

3. Resiliency of the FabricPath solution is better than OTV in some failure scenarios.

B.   For the Data Centers running FabricPath within the Data Center network

FabricPath

Pros                                      

1. Both links between the Data Centers are utilized, giving efficient load balancing. You can add more dark fibers between the Data Centers, and all the links can be used by the FabricPath load-balancing algorithm.

2. VLAN scalability for FabricPath is higher than OTV as of this writing.

Cons

1. Since both Data Centers are already running FabricPath, connecting them via FabricPath places both Data Centers in a single FabricPath topology. This means all the switches in both Data Center networks MUST have all the VLANs configured.

2. HSRP localization cannot be implemented as in OTV. However, you can have two different gateways at Data Center 1 and Data Center 2 using two different HSRP groups. If a server is moved dynamically from one Data Center to the other, you need to change the default gateway on the server.

3. If there is unknown unicast flooding in one Data Center, FabricPath also carries the flooding to the second Data Center.

4. No ARP optimization across the Data Centers.

OTV

Pros                                      

1. Does not extend FabricPath between the two Data Centers. Hence there are two independent FabricPath domains (Data Center 1 and Data Center 2). Only the VLANs that need to be extended between the Data Centers are configured and extended via OTV.

2. Unknown unicast flooding is not propagated across the Data Centers.

3. ARP optimization between Data Centers is available in OTV. A host sending tons of ARP requests can impact the control plane of the Nexus devices, causing them to drop legitimate ARP requests. With ARP optimization, these ARP floods may not be flooded across OTV to Data Center 2, so the Data Center 2 Nexus control plane may not be impacted.

4. HSRP localization is configurable, so you can keep the same HSRP IP (default gateway) in both Data Centers.

Cons

1. Typically two flows (odd VLANs by OTV-VDC-1 and even VLANs by OTV-VDC-2) carry the entire layer 2 traffic between the two Data Centers. Hence the load balancing across the links is not efficient.

2. VLAN scalability for OTV is lower than FabricPath as of this writing.

3. Resiliency of the FabricPath solution is better than OTV in some failure scenarios.

Hello, I do have a couple of questions, any info would be greatly appreciated.


  1. In an N7K / N5K / N2K data center environment using FabricPath (on both the N5K and the N7K), with classic Ethernet devices connected using vPC and/or orphan ports: could you please provide me with an overview indicating which spanning tree features to use where?
  2. Are private VLANs supported on FabricPath?
  3. With classic Ethernet I can use the 'allow vlan-list' to limit access to VLANs. Is there a FabricPath equivalent?
  4. Can FabricPath / IS-IS be tuned to sub-second convergence? And if yes, could you provide me with an example?

Thanks

Hi Hielke,

Please see my responses inline.

In an N7K / N5K / N2K data center environment using FabricPath (on both the N5K and the N7K), with classic Ethernet devices connected using vPC and/or orphan ports: could you please provide me with an overview indicating which spanning tree features to use where?

The spanning tree features on CE ports still follow the same best practices. For example:

- Configure ports going to servers as edge or edge trunk, enable BPDU guard, etc.

You just need to follow the standard spanning tree best practices for host ports. The only requirement is to make sure that the root of the VLAN is always a switch in the FabricPath domain.
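Those standard host-port practices, plus the root requirement, could be sketched as follows on a FabricPath edge switch (interface and VLAN ranges are illustrative examples):

```
! Illustrative CE host-facing port on a FabricPath edge switch
interface ethernet 1/10
  switchport mode trunk
  spanning-tree port type edge trunk   ! host-facing trunk; goes forwarding immediately
  spanning-tree bpduguard enable       ! err-disable the port if a BPDU is received

! Ensure the FabricPath domain holds STP root for the CE VLANs
spanning-tree vlan 100-200 root primary
```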

For more information regarding the interaction of spanning tree and FabricPath, see:

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c07-728188.pdf

Are private VLANs supported on FabricPath?

Yes, private VLANs are supported on FabricPath, but with the following restrictions:

–All VLANs in a private VLAN must be in the same VLAN mode; either CE or FP. If you attempt to put different types of VLANs into a private VLAN, these VLANs will not be active in the private VLAN. The system remembers the configurations, and if you change the VLAN mode later, that VLAN becomes active in the specified private VLAN.

–FabricPath core ports cannot be put into a private VLAN.
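A hedged example of the first restriction, with both members of a private VLAN pair in FP mode (VLAN numbers are hypothetical; `feature private-vlan` is assumed enabled):

```
feature private-vlan

vlan 100
  mode fabricpath                ! primary and secondary must use the same VLAN mode
  private-vlan primary
  private-vlan association 101

vlan 101
  mode fabricpath                ! matches the primary; mixing CE and FP here would
  private-vlan isolated          ! leave the VLAN inactive in the private VLAN
```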

With classic Ethernet I can use the 'allow vlan-list' to limit access to VLANs. Is there a FabricPath equivalent?

There is a feature called multitopology that can be used to manually prune certain VLANs to certain switches. Not all Nexus switches support this feature as of this writing. I think as of now we have it on the Nexus 5500 but not on the Nexus 7000 (it is on the roadmap).

Can FabricPath / IS-IS be tuned to sub-second convergence? And if yes, could provide me with an example?

Yes, it is possible. There is an excellent best practices document that was published recently (I think yesterday):

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c07-728188.pdf

In the "Tune Timers for Fast Convergence" section they give an example, and more details can be found under "Timer Tuning".
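For orientation only, the kind of FabricPath IS-IS timer tuning that document discusses looks roughly like the sketch below. Treat the exact command forms and values as assumptions and verify them against the white paper for your platform and NX-OS release:

```
! Illustrative (unverified values) - tighten IS-IS SPF and LSP throttling
fabricpath domain default
  spf-interval 50 50 50          ! example max-wait / initial / secondary values
  lsp-gen-interval 50 50 50      ! example LSP generation throttle values
```

Aggressive timers trade convergence speed for control-plane load, so values should come from the published best practices rather than this sketch.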
