I am working with a customer who would like to use path isolation in their network via VRF-Lite. I am currently debating between GRE tunnels and VLANs between the 3 core switches they have in place today. This will be an overlay network on top of what they currently have. The core is all L2 today with 802.1Q trunks between each of the 3 cores in a ring topology. Closets are single-homed into the core throughout.
My question is regarding GRE vs. VLANs. Currently, we are looking at having to deploy 12 VRFs to support 12 separate network types they would like to isolate. The access layer switches will trunk to the cores, where the core will apply VRFs to specific VLANs based on their role.
Which is going to be the more scalable solution from a performance and administration standpoint: GRE, VLANs, or MPLS?
Currently, the GRE implementation is going to require that we configure many loopbacks and tunnels on each core in order to get the VRFs talking to each other on each core. The VLAN approach will require 24 VLANs per core (assuming we go with point-to-point rather than multipoint links for routing inside the VRF).
Any thoughts on which way to proceed? From what I have read, GRE is more appropriate when you have multiple hops between VRF tables, which in this case we do not. I am concerned that with loopbacks, tunnels, and then routing on top of all that, the GRE solution will lack scalability as they add more VRFs. A point-to-point VLAN will pose a similar problem, but without the need for loopbacks, which should simplify the solution.
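To illustrate the overhead I am worried about, here is roughly what a single VRF's GRE plumbing would look like between two cores (all names, numbers, and addresses here are invented for illustration) — and this has to be repeated for each of the 12 VRFs, on each core pair:

```
! -- Sketch only: one VRF over GRE between Core1 and Core2 --
! -- Loopback/tunnel numbers and IP addresses are hypothetical --
ip vrf RED
 rd 65000:1
!
interface Loopback1
 ip address 10.255.1.1 255.255.255.255
!
interface Tunnel1
 ip vrf forwarding RED
 ip address 172.16.1.1 255.255.255.252
 tunnel source Loopback1
 tunnel destination 10.255.1.2
```

Note that the tunnel source/destination resolve in the global routing table while the tunnel interface itself sits inside the VRF, which is what ties the two VRF tables together.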
Can we use MPLS here and just do PE to PE MPLS and still get the VRF segmentation we need between cores?
I would like to eventually migrate the entire core to L3, but today we are stuck with having to support legacy networks (DEC/LAT/SNA) and have to keep some L2 in place.
What's the best approach here?
>> Can we use MPLS here and just do PE to PE MPLS and still get the VRF segmentation we need between cores?
Absolutely yes, and this is the best approach.
With MPLS forwarding you build a small MPLS cloud:
- the same LSP to a remote PE can be used for multiple VRFs without any routing ambiguity or loss of segregation;
- if the three multilayer switches are connected back-to-back, the best path will use an implicit-null label (pop tag) action, so the frame sent to the remote PE carries a single MPLS label: the VPN label that the remote PE advertised in MP-BGP.
You don't need a P node between two PEs to offer MPLS VPN services. All it takes is:
- one or two backbone links running MPLS;
- MP-BGP with the vpnv4 address family.
Without MPLS, plain VRF-Lite instead requires:
- a dedicated backbone link per VRF (12 links for 12 VRFs; they don't need to be point-to-point and can be LAN segments);
- one BGP session per VRF under address-family ipv4 vrf vrf-name, or another routing protocol per VRF.
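Put together, a minimal PE configuration along these lines might look like the following sketch (the AS number, RD/RT values, addresses, and interface numbers are all placeholders, not a tested build):

```
! -- Sketch of one PE (core switch); all values hypothetical --
ip vrf RED
 rd 65000:1
 route-target both 65000:1
!
mpls ip                           ! enable MPLS/LDP globally
!
interface Loopback0
 ip address 10.255.0.1 255.255.255.255
!
interface Vlan901                 ! PTP backbone VLAN toward the other core
 ip address 10.0.0.1 255.255.255.252
 mpls ip                          ! label switching on the core-to-core link
!
router bgp 65000
 neighbor 10.255.0.2 remote-as 65000
 neighbor 10.255.0.2 update-source Loopback0
 address-family vpnv4
  neighbor 10.255.0.2 activate
  neighbor 10.255.0.2 send-community extended
```

The key point is that all 12 VRFs ride the one vpnv4 session and the one labeled path, instead of 12 dedicated links and 12 routing sessions.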
GRE is not to be considered in your case: it is useful when you have to cross a non-MPLS network belonging to some other company. Here it has only drawbacks:
- possible MTU issues;
- possible fragmentation issues;
- possible performance issues.
I would stay away from it.
The MPLS scenario is the best choice; note that you need a Sup720-3B or better to support it.
An L3 MPLS backbone can also offer L2 VPN services, which should be able to carry the legacy traffic too.
However, if your core is a collapsed backbone, you can still keep dedicated L2 VLANs for the legacy traffic for now.
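If you ever do move the legacy traffic onto the MPLS backbone, an EoMPLS pseudowire is one option. A minimal sketch (the peer loopback address, VC ID, and interface are invented, and this assumes EoMPLS support on your supervisor/line cards):

```
! -- Sketch: point-to-point EoMPLS pseudowire for legacy L2 traffic --
interface GigabitEthernet1/1
 xconnect 10.255.0.2 100 encapsulation mpls   ! peer PE loopback + VC ID
```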
Hope to help
Can this be deployed without interruption of the existing forwarding services today? When you say backbone links, I presume that we could just run point-to-point VLANs between the switches to overlay this onto the existing infrastructure, and then run MPLS tag switching on those links only. Is this correct?
Introducing MPLS in a network requires some extra thought about MTU: the extra 4 bytes of the MPLS VPN label need to be handled correctly. In your case, with back-to-back direct links between the multilayer switches, this is easier.
To have little to no impact on the existing services, use an access list so that MPLS LDP labels are generated only for the loopbacks (only one per node), and use /32 addresses for the loopbacks:
mpls ldp advertise-labels for acl_name
In this way all existing L3 and L2 services keep working as before.
If you use OSPF, you also need to add
ip ospf network point-to-point
under the loopback interface configuration.
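A sketch of the whole piece (the ACL name and addresses are hypothetical; adapt to your loopback plan):

```
! -- Sketch: advertise LDP labels only for the PE loopbacks --
ip access-list standard PE-LOOPBACKS
 permit 10.255.0.1
 permit 10.255.0.2
 permit 10.255.0.3
!
no mpls ldp advertise-labels                  ! stop advertising labels for everything
mpls ldp advertise-labels for PE-LOOPBACKS    ! labels only for the /32 loopbacks
!
interface Loopback0
 ip address 10.255.0.1 255.255.255.255
 ip ospf network point-to-point               ! advertise the loopback with its configured /32 mask
```

With labels bound only to the loopbacks, transit traffic for the existing services stays unlabeled and is forwarded exactly as before.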
Hope to help
What is the interaction between MPLS/GRE and VLANs here?
Cisco doesn't recommend L2 with STP in such a design because of latency and convergence issues, so the best design approach is to migrate to L3.
I think the best approach would be to be more precise in your question.
Thanks for your reply. By deploying point-to-point VLANs on each trunk, unique to that trunk only, you are in effect layering an L3 network over an existing L2 network. A totally L3 network between two cores, with no trunking support, would in this case require us to use:
1. Multiple Interfaces - 1 per VRF
2. GRE tunnels between cores
VRF-Lite does not support the MPLS forwarding features of a standard PE device. VRF-Lite requires that you connect the VRF routing tables on each device together with some form of Layer 3 connection: GRE tunnels or VLANs.
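As an illustration, the per-VRF VLAN interconnect with VRF-Lite looks roughly like this on each core (VLAN number, VRF name, and addressing are invented), repeated once per VRF per inter-core trunk:

```
! -- Sketch: one VRF-Lite point-to-point VLAN between two cores --
ip vrf VOICE
 rd 65000:10
!
vlan 910
 name PTP-VOICE-CORE1-CORE2
!
interface Vlan910
 ip vrf forwarding VOICE
 ip address 172.16.10.1 255.255.255.252
```

With 12 VRFs and two trunks per core, that is where the 24 VLANs per core in the original question come from.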
Thanks for all your feedback. I have indeed read the document you posted here before; I have actually used it as a reference in many of the designs I have done in the past. The question here is not specific to L3 access, which we would like to use; however, there is no support for VRF-Lite on our edge switches since they are running IP Base images. So we are using the cores to terminate the access layer switches, with L2 VLANs on the switches specific to each LAN closet. In essence, we are still achieving L3 segmentation at the closet level, but the actual routing is occurring on the core switches instead. Not the same as L3 access, but it's as close as we can get and still have the solution work out the way we intend.
Check out this document on VRF-Lite. Very good material.
Can I ask you to share the approach you ended up with for this design? I am currently working on a similar scenario where I have implemented VRF-Lite end-to-end (with VLANs, and some with GRE). As this solution is very complex with high administrative overhead, I am looking at the option of MPLS. My issue is that I need to collapse PE/CE and edge switches into 6509s. There will be 40+ 6509s at the edge and over 30 VRFs for the campus network I am designing.
Any ideas or pointers are welcome.
Thanks in advance.
>> There will be 40+ 6509s at the edge and over 30 VRFs for the campus network I am designing.
With these figures, MPLS forwarding and multiprotocol BGP are recommended.
Hope to help
I actually ended up with basically the same design you are talking about here, except that I added a couple of 6500s with FWSMs, plus NAC L3/L2 CAM/CAS, into the mix.
Here is the high level overview
1. Every Closet had a minimum of 6 VLANs - unique to the stack or closet switch - Subnets were created for each VLAN as well - no spanning of L2 VLANs across switch stacks.
2. VLANs were assigned for - Voice, Data, LWAPP VLAN, Guest/Unauthorized, Switch/Device Management, and at least 1 special purpose VLAN - (Lab, Building Controls, Security, etc).
3. Then we trunked all the VLANs back to 1 of 3 cores - 6509s with Sup-720s
4. Each Core 6509 was configured for each L2 VLAN with a L3 SVI (The VLANs configured here were not configured on any other cores - we didn't have available fiber runs to do any type of redundant pathing across multiple cores so it wasn't valid in this design to configure VLAN SVIs on more than one core).
5. Each L3 SVI was assigned to the appropriate VRF based on use - Voice, Data, LWAPP, etc
6. Spanning-Tree Roots for all VLANs trunked to a core were specific to that core - they did not trunk between Cores - no loops
7. Each core was connected via an L2 trunk that carried point-to-point VLANs for VRF traffic - we had an EIGRP AS assigned to each VRF on the link, so we had 6 VRFs and 6 EIGRP ASes per trunk.
8. This was repeated twice on each core, as each core connected to the other two in a triangle fashion.
9. Each of the cores had a trunk to the 6500 with the FWSM configured - the VRF/L3 PTP VLAN design continued here as well.
10. The 6500+FWSM was configured with multiple SVIs and VRFs - we had to enable multiple-VLAN mode on the FWSM to get it to work.
11. Layer 2 NAC was configured with VLAN translation coming into the core 6500/FWSM for wireless in L2 in-band mode - the L3 SVIs were configured on the clean side of the NAC CAM, so traffic was pulled through the CAM from the dirty side, where the controller mapped host SSIDs to the appropriate VLANs. We only had to configure a couple of host VLANs here - Guest and Private - so this was not much of an issue: Private was NAC-enabled, and the Guest VLAN/SVI was mapped to a DMZ on the firewall.
12. For Layer 3 NAC we just used an out-of-band CAM configuration with ACLs on the unauthorized VLAN.
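Step 7 above - one EIGRP AS per VRF over the point-to-point VLANs - can be sketched like this (VRF name, AS numbers, VLAN, and addressing are hypothetical), repeated for each VRF on each inter-core trunk:

```
! -- Sketch: per-VRF EIGRP over a PTP VLAN between two cores --
interface Vlan910
 ip vrf forwarding VOICE
 ip address 172.16.10.1 255.255.255.252
!
router eigrp 100
 address-family ipv4 vrf VOICE
  autonomous-system 10
  network 172.16.10.0 0.0.0.3
  no auto-summary
```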
It worked like a charm.
If I had to do it all over again I would go with MPLS/BGP for more scalability. Configuring trunks between the cores and then having the multiple EIGRP AS/PTP VLANs works well in networks this small, but it doesn't scale indefinitely. It sounds like your network is quite large. I would look into MPLS between a set of at least 3-4 core PE/CE devices. Do you plan on building a pure MPLS core for tag-switched traffic only? Is your campus and link makeup significant enough to benefit from such a flexible design?
"It worked like a charm" as said. Let me apprecite the effort you took in designing the network and implementing the VRF-lite concepts, Firewall contexts. Great work done.
When I am designing the core or backbone for a network, I always take scalability and administration of the network into consideration.
Shine: you should take the approach of implementing a full-blown MPLS VPN architecture using MP-BGP.
Merry Christmas to all of you! This was as a great post to read.