Cisco Support Community
New Member

FabricPath-Datacenter-Interconnection-only with Nexus 5500

Hi all,

we will interconnect 3 Datacenters in 3 different locations connected with DWDM.

The idea was to use 2x NX 5548 in each location to set up an FP cloud between the sites.

But I am not sure if we can use the NX5500 in spine and leaf mode.

Or do we need an N7K pair as the spine?

The documentation is not 100% clear at this point.

It would be great if someone could answer that ;-)

Ciao Andre

27 REPLIES
Silver

FabricPath-Datacenter-Interconnection-only with Nexus 5500

I want to use exactly the same design.

Just like spanning-tree, you can have any topology you want.

Leaf-and-spine architectures are simply the state of the art for HPC networks, giving predictable latency from any peer to any peer.
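For what it's worth, bringing up a FabricPath cloud on N5500s in an arbitrary topology only takes a few commands per switch. A rough sketch, assuming a dedicated FP VLAN range (the switch-id, VLAN and interface numbers below are made-up examples, not a validated design):

install feature-set fabricpath
feature-set fabricpath
fabricpath switch-id 1
! VLANs carried across the FP cloud must run in fabricpath mode
vlan 100-199
  mode fabricpath
! links towards the other FP switches become FabricPath core ports
interface Ethernet1/9
  switchport mode fabricpath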

New Member

FabricPath-Datacenter-Interconnection-only with Nexus 5500

We will order the equipment in the next few days and let you know our experiences with the FP cloud during the implementation.

I also got the information that the N5500 is valid for designs like this, but you need to keep something in mind if you want to use L3 modules and HSRP in that topology:

HSRP active-site / active-site won't work with more than two DC sites, or if there are additional FP switches in the topology!

Silver

FabricPath-Datacenter-Interconnection-only with Nexus 5500

You can use inter-site FHRP isolation if inter-VLAN routing is owned by other devices, and if you apply a PACL or VACL on the Nexus 5500 at the edge ports (classical Ethernet) of the TRILL cloud.
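For illustration only (the ACL name and interface are placeholders, and this assumes HSRP for IPv4; HSRPv1 hellos go to 224.0.0.2 and HSRPv2 hellos to 224.0.0.102, both UDP port 1985), a PACL on a classical-Ethernet edge port could look roughly like this:

! hypothetical filter to keep HSRP hellos local to each site
ip access-list DENY-HSRP
  deny udp any 224.0.0.2/32 eq 1985
  deny udp any 224.0.0.102/32 eq 1985
  permit ip any any

! edge (classical Ethernet) port of the FP cloud; interface is an example
interface Ethernet1/20
  ip port access-group DENY-HSRP in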

Dee
New Member

FabricPath-Datacenter-Interconnection-only with Nexus 5500

Hi Andre,

What did you end up doing? Did you get any more info on this?

Cheers,

Dion

New Member

FabricPath-Datacenter-Interconnection-only with Nexus 5500

Hi Dion,

I got the information from Cisco that FP is approved with only the N5500,

so I ordered the hardware for the new DCI between the DCs (6x 5500).

It should arrive in mid-May!

Then we can start with the testing.

Ciao Andre

Dee
New Member

FabricPath-Datacenter-Interconnection-only with Nexus 5500

Nice... enjoy those toys.

So is the 5548UP capable of running as a spine?

Most design references are for two DCs; have you found one for three or more DCs?

The concept looks easy... but when you start looking at the detailed connections and try to understand the traffic flow, it gives me a headache... LOL.

Dion

New Member

FabricPath-Datacenter-Interconnection-only with Nexus 5500

Hi all,

we have now set up our lab for the DCI FabricPath testing!

We configured the FabricPath edge towards the old world with vPC+ as recommended,

no use of Layer 3.

The result is a very, very slow convergence time of 2.5 sec if a vPC+ (port-channel) member link goes down,

and again 2.5 sec when the link comes back!

We reconfigured the edge with RSTP towards the old network, and the convergence time was, as estimated, sub-second,

around 400-700 msec.

Why is vPC+ with FabricPath that slow? Does anybody have the same experience?

I hope to get an answer; with a total convergence time of 5 sec, we can't implement that in a DCI environment!

Ciao Andre

Silver

FabricPath-Datacenter-Interconnection-only with Nexus 5500

What happens if you try without vPC+?

vPC has longer convergence times than VSS or other technologies.

New Member

FabricPath-Datacenter-Interconnection-only with Nexus 5500

Hi Surya,

we already configured it as an RSTP connection,

but we will soon replace the edge systems in the old world (old C6500s) with N7Ks, and for that we would like to use vPC+ / vPC between the DCI edge switches and the old world. For now it is STP only; we haven't migrated to vPC in the old world yet...

A few months ago I tested vPC and MEC with 2x N5K and 2x C6500 (VSS),

and the convergence time was sub-second!

So we are not very happy with the situation yet, because 2.5 sec in a DC environment is too long!

Ciao Andre

Silver

Re: FabricPath-Datacenter-Interconnection-only with Nexus 5500

Hi.

2.5 seconds will not impact you that much; the best practice for HSRP timers is still hello=1 / hold=3, because nearly all TCP applications won't face any issue with 3 or 4 seconds of disruption.

It will only have an impact if you use a lot of non-TCP apps or very specific TCP-based apps.
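For reference, those timers are set per HSRP group on whatever device owns the inter-VLAN routing; a minimal NX-OS sketch with made-up VLAN, group number and addresses:

feature interface-vlan
feature hsrp

interface Vlan100
  ip address 10.1.100.2/24
  hsrp 100
    ip 10.1.100.1
    ! hello 1 s / hold 3 s, as mentioned above
    timers 1 3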

New Member

FabricPath-Datacenter-Interconnection-only with Nexus 5500

Hi,

we are implementing a DCI interconnection between 3 sites on Layer 2 only!

It is a Layer 2 disaster recovery setup, for vMotion of ESX VMs.

These 2.5 sec (unplugging) + 2.5 sec (plugging) will impact our DC environment very heavily.

The old setup with Nexus and STP has a sub-second convergence time, so we can't implement a new technology that is slower...

We will see ....

Ciao Andre

Silver

FabricPath-Datacenter-Interconnection-only with Nexus 5500

What do the timestamped logs show? Can you investigate where the time is consumed during these 2.5 seconds?

New Member

FabricPath-Datacenter-Interconnection-only with Nexus 5500

So as a result, RSTP converges much faster than vPC.

I did a debug on the ports of the Nexus 5K DCI in vPC mode:

The logging shows a link down and the time until the port-channel recognizes the link failure.

It takes 3.5 sec!

(measured between the first and the second "ETHPORT-5-IF_DOWN_PORT_CHANNEL_MEMBERS_DOWN: Interface port-channel11 is down" message)

----------

2012 Sep 4 11:05:10.927 DCI-FP-001 %ETH_PORT_CHANNEL-5-FOP_CHANGED: port-channel11: first operational port changed from Ethernet1/1 to none

2012 Sep 4 11:05:10.930 DCI-FP-001 %ETH_PORT_CHANNEL-5-PORT_DOWN: port-channel11: Ethernet1/1 is down

2012 Sep 4 11:05:10.930 DCI-FP-001 %ETH_PORT_CHANNEL-5-PORT_DOWN: port-channel11: port-channel11 is down

2012 Sep 4 11:05:10.951 DCI-FP-001 %ETHPORT-5-IF_DOWN_PORT_CHANNEL_MEMBERS_DOWN: Interface port-channel11 is down (No operational members)

2012 Sep 4 11:05:14.507 DCI-FP-001 %ETHPORT-5-IF_DOWN_LINK_FAILURE: Interface Ethernet1/1 is down (Link failure)

2012 Sep 4 11:05:14.533 DCI-FP-001 %ETHPORT-5-IF_DOWN_PORT_CHANNEL_MEMBERS_DOWN: Interface port-channel11 is down (No operational members)

Reconnecting the link:

It takes 2 sec.

-----------

2012 Sep 4 11:13:43.517 DCI-FP-001 %ETH_PORT_CHANNEL-5-PORT_UP: port-channel11: Ethernet1/1 is up

2012 Sep 4 11:13:43.523 DCI-FP-001 %ETH_PORT_CHANNEL-5-FOP_CHANGED: port-channel11: first operational port changed from none to Ethernet1/1

2012 Sep 4 11:13:43.652 DCI-FP-001 %ETHPORT-5-IF_UP: Interface Ethernet1/1 is up in mode trunk

2012 Sep 4 11:13:45.415 DCI-FP-001 %ETHPORT-5-IF_UP: Interface port-channel11 is up in mode trunk

I did the same with RSTP and had link-down notification within 200 msec!

Ciao Andre

Silver

FabricPath-Datacenter-Interconnection-only with Nexus 5500

Do you have any carrier-delay configured on some interfaces? Can you give us the config of the DCI-facing ports?

Remember that FabricPath over vPC is not supported, as far as I know. DCI-facing links should not be port-channels in a vPC; with vPC+ there is an FP forwarder elected within the vPC+ domain.
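In other words (interface and port-channel numbers below are only placeholders), the DCI links would be plain FabricPath core ports, and only the classical-Ethernet side would hang off a vPC+ port-channel, roughly:

! DCI-facing link into the FP cloud; no vPC here
interface Ethernet1/5
  switchport mode fabricpath

! classical-Ethernet edge towards the legacy STP domain
interface port-channel11
  switchport mode trunk
  vpc 11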

New Member

FabricPath-Datacenter-Interconnection-only with Nexus 5500

Hi Surya,

we already tested configuring carrier-delay on the C6500; the N5K does not have a similar command,

only the link debounce timer, but that didn't improve the link-down notification.
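Just for completeness, the debounce knob is a per-interface setting; as far as I know the syntax is as follows, where 0 disables the debounce delay (the interface number is only an example):

interface Ethernet1/1
  ! 0 ms = report link-down immediately, without debouncing
  link debounce time 0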

We don't use FabricPath over vPC; the vPC connection is at the edge of the FP cloud towards an old STP DC cloud!

We configured the vPC+ with a virtual switch ID as recommended!

Ciao Andre

New Member

FabricPath-Datacenter-Interconnection-only with Nexus 5500

This is our setup; maybe that will help ;-)

Silver

FabricPath-Datacenter-Interconnection-only with Nexus 5500

Did you try to play with the "delay restore" parameter in your vPC domain?

New Member

FabricPath-Datacenter-Interconnection-only with Nexus 5500

No improvement!

Configured "delay restore" to 1 sec.

Still the same convergence time of 2.5 sec unplugging + 2.5 sec plugging the link back in!

Silver

FabricPath-Datacenter-Interconnection-only with Nexus 5500

How do you check the reconvergence time? Using the timestamps in the logs, or by sending real traffic for your test? Do you see real traffic loss of nearly 3 seconds? I don't have a Nexus 5K on my desk currently, but what I find strange is that the interface pushes its state immediately to the STP process when running STP, but not in the case of vPC. Do you see the same delay if running STP over a traditional port-channel?

Can you send the output of "show run int e1/1" and "show run int po11"?

New Member

FabricPath-Datacenter-Interconnection-only with Nexus 5500

Hi Surya,

we measure with an IXIA traffic generator.

I measure around 2500-2700 msec at link down, and around 2000-2300 msec at link up.

The link down and up with STP is around 200 msec.

I need to reconfigure it to send logs.

"Sh run" configs will follow!

Ciao Andre

New Member

FabricPath-Datacenter-Interconnection-only with Nexus 5500

!Command: show running-config vpc

!Time: Wed Sep  5 10:09:48 2012

version 5.1(3)N2(1a)

feature vpc

vpc domain 10

  role priority 1024

  system-priority 1024

  peer-keepalive destination 10.158.100.101 source 10.158.100.100

  delay restore 1

  auto-recovery

  fabricpath switch-id 10

  ip arp synchronize

interface port-channel10

  vpc peer-link

interface port-channel11

  vpc 11

DCI-FP-001#

DCI-FP-001# sh run int eth 1/1

!Command: show running-config interface Ethernet1/1

!Time: Wed Sep  5 10:08:56 2012

version 5.1(3)N2(1a)

interface Ethernet1/1

  description DC1-Backup, Te0/1#U

  switchport mode trunk

  switchport trunk allowed vlan 2-3967,4048-4093

  storm-control broadcast level 3.00

  storm-control multicast level 3.00

  channel-group 11 mode active

DCI-FP-001# sh run int po 11

!Command: show running-config interface port-channel11

!Time: Wed Sep  5 10:09:37 2012

version 5.1(3)N2(1a)

interface port-channel11

  description DC1-Backup, Po10#U

  switchport mode trunk

  switchport trunk allowed vlan 2-3967,4048-4093

  spanning-tree port type normal

  spanning-tree guard root

  spanning-tree bpdufilter disable

  storm-control broadcast level 3.00

  storm-control multicast level 3.00

  vpc 11

DCI-FP-001# sh vpc

Legend:

                (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                   : 10 

vPC+ switch id                  : 10

Peer status                     : peer adjacency formed ok     

vPC keep-alive status           : peer is alive                

vPC fabricpath status           : peer is reachable through fabricpath

Configuration consistency status: success

Per-vlan consistency status     : success                      

Type-2 consistency status       : success

vPC role                        : primary                      

Number of vPCs configured       : 1  

Peer Gateway                    : Disabled

Dual-active excluded VLANs      : -

Graceful Consistency Check      : Enabled

vPC Peer-link status

---------------------------------------------------------------------

id   Port   Status Active vlans   

--   ----   ------ --------------------------------------------------

1    Po10   up     2-501,2000-3499                                          

vPC status

---------------------------------------------------------------------------

id     Port        Status Consistency Reason       Active vlans vPC+ Attrib

--     ----------  ------ ----------- ------       ------------ -----------

11     Po11        up     success     success      2-501,2000-3 DF: Partial 

                                                   499     

Silver

FabricPath-Datacenter-Interconnection-only with Nexus 5500

Strange issue; I have a test acceptance plan scheduled for mid-September for a new DC with vPC on Nexus 5596; I'll take a look at the convergence time.

Silver

FabricPath-Datacenter-Interconnection-only with Nexus 5500

Hi.

Could you find the reason why the link status is reported so slowly? Any TAC case opened?

New Member

FabricPath-Datacenter-Interconnection-only with Nexus 5500

Hi,

No, we have now raised a case with Cisco's DC Nexus BU.

Hope we will get an answer soon!

Silver

FabricPath-Datacenter-Interconnection-only with Nexus 5500

Did you test the vPC convergence time without vPC+ / FabricPath? Just building a port-channel to another switch with pure Ethernet and traditional vPC.

New Member

FabricPath-Datacenter-Interconnection-only with Nexus 5500

Yes and no; I did a test in another environment with N5K and C6500 (Sup720 as VSS).

MEC on the C6500 and vPC on the N5K side, and we had sub-second convergence time.

Our concern is that we would like to use FP for the DCI connections, so we need a solution for this issue.

New Member

Hi Andre, did you experience

Hi Andre,

Did you experience any issues with your previous Layer 2 interconnection between the three datacenters?

Just wondering, because this is an option I'm considering for connecting three DCs...

Thanks, Luigi.
