Hello everyone, we are currently running a datacenter with two Nexus 7K switches that routes to our existing DR site, which is on a different network. We are building a new datacenter close by that will have two 10 Gig links with <1 ms latency. I have heard that OTV may not be the best investment, even though it is easy and separates things like spanning tree and your HSRP gateways.
So due to cost we will connect the two 7Ks, but we will not use OTV; this will be a vPC trunk to both 7Ks. We use HSRP and know how to change it and change spanning tree elections. My question here is: should I switch to GLBP and then elect spanning tree roots based on load? Also, we use EIGRP to route from the 7K cores with 10.0.0.0. Should I modify metrics, or will EIGRP work by default?
vPC as a DCI is a supported design. I would look closely at this doc:
I'm not sure what you're asking in regards to GLBP / spanning tree.
EIGRP should work fine, but again, pay heed to the various gotchas on routing-protocol peering over vPC. Specifically, see figures 63 & 64 in this guide. You may need a separate Layer 3 link depending on your requirements.
We will have two 10 Gig links, and all of the core VLANs will have to span across the trunk.
I am attaching a diagram; both sites will have MPLS to route out. I don't see a reason to do routing between the two cores, or am I missing something?
Based on your diagram I can't think of any significant benefits to peer between datacenters, besides maybe a more optimal default route should a firewall die, or connectivity should you lose both MPLS links.
Also, if you're peering to the 3750 stacks in your diagram, you will need to be sure these are separate Layer 3 links, not vPCs.
OK, so they would not be peers; the two 10 Gig links would be a trunk between the two with a peer link. We need the Layer 2 connection for vMotion and to extend the core VLANs. It's similar to OTV, but since there is no Layer 3 in between, we are just trunking the two datacenters.
In this case, do we still need to have every VLAN trunked across the link so we can access each VLAN from the remote datacenter? Or can we have static routes to an SVI from one datacenter to the next?
You would only have to trunk the VLANs you need to extend, but yes. I believe you should be able to static route from SVI to SVI, as long as traffic isn't destined for another vPC member port, but I think it's still recommended to use dedicated Layer 3 links for this.
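For what it's worth, an SVI-to-SVI static route is just an ordinary NX-OS static route whose next hop is the remote switch's SVI address. A minimal sketch — all prefixes and addresses here are hypothetical, adjust to your own subnets:

```
! On a 7K in datacenter A (addresses hypothetical):
! reach a VLAN 20 subnet that exists only in datacenter B,
! with the remote 7K's VLAN 3 SVI address as the next hop
ip route 10.100.20.0/24 10.100.3.241
```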
If I set up the same VLAN on both sides with an SVI and run the same EIGRP process, each switch will learn routes via EIGRP.
This is the same as running EIGRP on a switch that is trunked, correct?
You can peer both 3750s together in your diagram through vPC without problems. Peering 7Ks across datacenters on vPC VLANs is unsupported. Dedicated Layer 3 links are recommended. See pages 73, 74 & 77.
I'm sorry, I'm not sure I understand. This is similar to OTV, but without OTV, since this is not a Layer 3 connection.
Nexus 1 and 2 are peered, Nexus 3 and 4 are peered, and there is a trunk between these switches. vPC is just a virtual port channel, so it makes the two links work as one.
Say Nexus 1 and 2 have EIGRP routes out to the 3750 Layer 3 switch.
So I configure VLAN 3 as 10.100.3.251 on 7K1 and VLAN 3 as 10.100.3.252 on 7K2. HSRP is 10.100.3.254.
VLAN 3 runs EIGRP 100.
N7K3 has VLAN 3 as 10.100.3.241 and N7K4 has VLAN 3 as 10.100.3.242; both run EIGRP 100 and are part of HSRP for gateway 10.100.3.254.
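If it helps to see it spelled out, here is roughly what that VLAN 3 setup would look like on N7K1 in NX-OS. This is only a sketch built from the addresses above; the /24 mask, the HSRP group number 3, and the feature commands are assumptions:

```
! Sketch for N7K1 (mask and HSRP group number assumed)
feature interface-vlan
feature hsrp
feature eigrp

router eigrp 100

interface Vlan3
  no shutdown
  ip address 10.100.3.251/24
  ip router eigrp 100   ! NX-OS enables EIGRP per interface; no network statements
  hsrp 3
    ip 10.100.3.254
```

The other three 7Ks would mirror this with their own SVI addresses (.252, .241, .242) and the same virtual IP.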
EIGRP will exchange routes, so a show ip route should show routes to the other datacenter via interface VLAN 3? Just as an example.
So it is Layer 2, but EIGRP will share routes through the SVI IP address, so I guess it is a hybrid Layer 2/3.
I just opened a TAC case as well, LOL. We just bought all of this, and this was my understanding.
The vPC peer link *is* a trunked port-channel; however, it has special rules for loop prevention. Specifically, if a packet arrives over the peer link and is destined for a vPC member port, that packet will be dropped as long as all ports in that vPC port-channel are up. So ideally you are never sending traffic across the peer link unless there is a link failure in a vPC.
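To illustrate, a minimal peer-link configuration is just a trunked port-channel flagged as the peer link. The domain ID, keepalive addresses, and interface numbers below are assumptions, not taken from the poster's setup:

```
! Minimal vPC peer-link sketch (domain, addresses, interfaces all assumed)
feature vpc

vpc domain 1
  peer-keepalive destination 192.168.1.2 source 192.168.1.1

interface Ethernet1/1-2
  switchport mode trunk
  channel-group 10 mode active

interface port-channel10
  switchport mode trunk
  vpc peer-link
```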
In addition to the original doc I mentioned, this is a good one to understand supported topologies.
If the 3750 switches are connected with a vPC, then what you mention is not supported. If they are connected with individual Layer 3 links, then it would be supported. If they are connected with two separate trunk links, then it would be supported so long as you have a dedicated, non-vPC spanning-tree link between the 7Ks.
You can peer between 7Ks over the vPC peer link, but it is the least recommended practice. Most recommended is a dedicated Layer 3 point-to-point link, or a non-vPC trunk or port-channel with peering over VLAN SVIs.
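A dedicated Layer 3 point-to-point link for that peering might look like the sketch below. The interface number and the /30 addressing are assumptions; EIGRP 100 matches the process number used elsewhere in this thread:

```
! Dedicated routed link between the two 7Ks (interface and /30 assumed)
interface Ethernet1/48
  no switchport
  ip address 192.168.100.1/30
  ip router eigrp 100
  no shutdown
```

The far-side 7K would get the matching address (e.g. 192.168.100.2/30) on its end of the link.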
Both datacenters will have two MPLS providers as well; I don't redistribute EIGRP into BGP.
So as I understand the first diagram in the link above, one 3750 is EIGRP routed, and the other switch is a trunk, but it is two trunk links and also runs EIGRP. Our 5Ks are on a vPC trunk; there is an SVI on the 5K.
So I'm still not getting it, sorry; I still see the link as Layer 2. The link will be a vPC trunk to both datacenters. So say both datacenters are connected with two 10 Gig trunk links, and I have EIGRP 100 on the VLANs that are extended. For HSRP, it needs to know about the other side, correct?
EIGRP has still converged, correct? And is aware?
I have created a port channel on two 4506 switches, both running EIGRP, and if I look at the routes on the client switch, it sees the routes via the SVI VLAN 1, so the switch forwards traffic out that. Is this not the same?
Say my Layer 3 3750 switches on both sides have VLANs on them; if I do a show ip route, I should see the other side's VLANs, but via the core? Correct?
Or do I have to redistribute EIGRP into BGP so that both datacenters know each other's routes?
Figure 7 in the vPC guide is what we are wanting to accomplish, except we will just have four Nexus 7K switches total, so the core will provide the DCI.