Hi everyone, we are adding a second site and want to interconnect it with the primary (running FabricPath on two Nexus 7010 boxes). We are going to have 2x10G DWDM links to the secondary site, which will get only one 7010 box. For DCI the most scalable option is OTV, but given that we own the DWDM, can we consider deploying FabricPath instead of OTV for the DCI?
I also know that with OTV we can configure HSRP with the same IP in both DCs. Can we do the same with FP? I believe not, but I just want to hear if someone has achieved similar results with this config.
I'm proposing DCI with vPC, as you mention, with HSRP (same group) on both sites and filtering HSRP messages on the DCI links to keep both sites active; this way I can ensure egress path optimization.
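A minimal sketch of that HSRP isolation filtering, assuming hypothetical ACL and port-channel names (NX-OS syntax): a port ACL on the DCI-facing interface at each site drops HSRP hellos so neither site sees the other's HSRP peers and both keep a local active gateway. HSRPv1 uses 224.0.0.2 and HSRPv2 uses 224.0.0.102, both on UDP port 1985; adapt names and versions to your environment.

```
! Hypothetical names - apply on the DCI port-channel at BOTH sites
ip access-list HSRP_ISOLATE
  10 deny udp any host 224.0.0.2 eq 1985    ! HSRPv1 hellos
  20 deny udp any host 224.0.0.102 eq 1985  ! HSRPv2 hellos
  30 permit ip any any

interface port-channel10
  description DCI to secondary site over DWDM
  ip port access-group HSRP_ISOLATE in
```

A MAC ACL blocking the HSRP virtual MAC on the DCI link is often added alongside this, so hosts in one DC never ARP-resolve the gateway across the interconnect.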
OTV seems unnecessary to me. I know we gain some benefits by using it, but it doesn't seem to be necessary, at least from my point of view.
FP came into play because we're actually already running FP in DC1 (the one with the two N7K boxes), and someone had the brilliant idea to use it for the interconnection instead of OTV. The only real benefit I see of FP over OTV is link utilization (load balancing).
I believe the best design here is with vPC, also considering that we are going to have some workload mobility over this link, so keeping the same default gateway on the FHRP is a must, and FP cannot accomplish that. That leaves only two options for me, OTV and vPC, and comparing the pros and cons I believe vPC is better.
The idea behind this post was to discuss whether this seems correct to you, and also to have solid grounds for declining the FP option (again, it doesn't make any sense to me).
I would say it's all about requirements. OTV was actually designed for DCI. With vPC we still have limitations carrying Layer 3 traffic; you can use an alternate approach, like having a separate L3 link between the two DCs for L3 traffic.
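A minimal sketch of that alternate approach, assuming hypothetical interface names, addressing, and an OSPF process: a dedicated routed point-to-point link carries inter-DC L3 traffic, kept separate from the vPC so routing adjacencies do not form over the vPC interconnect.

```
! Hypothetical interface and addressing - one end of the dedicated L3 DCI link
interface Ethernet1/48
  description Routed L3 link to DC2 (outside the vPC)
  no switchport
  ip address 10.255.0.1/30
  ip router ospf 1 area 0.0.0.0
  no shutdown
```

The other end would mirror this with 10.255.0.2/30, and routed traffic between the DCs then follows this link while the vPC carries only the stretched VLANs.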
The document below covers both scenarios for DCI, with vPC and with OTV.