
Server Farm in the MPLS CORE

Hello,

I think I am overcomplicating how to attach a server farm to my core P switches.

In brief, my network has two 6509 switches acting as P routers (with 10G between them); these connect to 7609 routers, which are my main PEs with Internet connectivity from several providers.

My first question is: for redundancy, should my two 6509 core switches (P switches) be connected to both 7600 PE routers? Currently I have SW1 (6509) connected to RTR1-PE1 (7609), which connects to multiple Internet service providers and clients (L3 VPNs). SW1 connects to SW2 (the 6509 in the other DC), and SW2 connects to its own RTR2-PE2 (7609), which also connects to multiple Internet service providers and clients (L3 VPNs). My question is: should I add links from RTR1-PE1 to SW2 and from RTR2-PE2 to SW1?

My core switches (SW1 and SW2) are pure P routers (they don't run BGP), and I want to attach a couple of switches (acting as access switches) with 3 or 4 server farms. My question is:

- I want these server farms to be accessible to some clients, so I need to put them in VRFs. How should I do this? I'm considering three approaches:

1 - Switch the traffic up to the PEs and then (somehow?) place the server farm LANs into VRFs there

2 - Configure BGP on the two core switches and put the farms into VRFs there

3 - Configure the access switches as PE switches/routers and go from there ...

Which do you think is the better approach? Or is there another?

Thanks in advance

Nuno Ferreira



Giuseppe Larosa

Hello Nuno,

First question:

If you currently have only one horizontal link between HQ and DC, it is wise to add some redundancy. Possible choices (see the sketch after this list) are:

- a direct link between RTR1-PE1 and RTR2-PE2, with IP routing and MPLS enabled on it and with an IGP metric that makes it a backup link

- adding two links, RTR1-PE1 to SW2 and RTR2-PE2 to SW1, with IP routing and MPLS enabled
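
For the first choice, a minimal sketch of the backup link between the two PEs (assuming OSPF as the IGP; the interface name and addressing are illustrative only):

interface TenGigabitEthernet3/1
 description Backup link to RTR2-PE2
 ip address 10.0.12.1 255.255.255.252
 ip ospf cost 1000    ! inflated IGP metric so the link carries traffic only as a backup
 mpls ip              ! enable MPLS/LDP so LSPs can reroute over this link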

Second question: it depends on whether you want to keep this design, where SW1 and SW2 are only P nodes, or whether you are willing to make them PE nodes too.

Note that this is not a problem at all: a node can act as a P for some traffic flows/LSPs and as a PE for other flows/LSPs.

So all options here are possible.

The current PE nodes can act as PEs for the server farms too.

Option 3 is probably less attractive because it may require more capable devices to act as PEs.

Other variations are possible, including the use of multi-VRF CEs at the access layer, with L2-only switching in the core switches towards the current PE nodes; a sketch follows.
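
As a rough illustration of the multi-VRF CE (VRF-lite) idea on an access switch, with all names and numbers hypothetical:

ip vrf FARM-A
 rd 65000:101          ! VRF-lite still needs an RD locally, but no MP-BGP runs on the CE
!
interface Vlan101
 description Server farm A
 ip vrf forwarding FARM-A
 ip address 10.1.1.1 255.255.255.0
!
! An 802.1Q trunk (one VLAN per VRF) then carries each VRF separately
! up to the real PE, which maps the VLANs into its own VRFs.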

Hope to help

Giuseppe

Giuseppe,

Thanks for your reply and ideas.

Regarding point one, I was already thinking of adding the extra links from the main PEs to the core switches. My idea was to link RTR1-PE1 to SW2 and RTR2-PE2 to SW1, keeping the switches P-only. If I make the switches PE routers for some traffic, will that decrease performance? What problems might I run into, and what benefits might I get? Is it OK to have an MPLS core without any pure P routers/switches?

Regarding point number 2: if I make the 6509s PE switches, how do I create the server farms? I ask because I want the access switches used for the server farms to ONLY switch up to the 6509s, which means the 6509s would be the gateways for those server farms. Is it just a matter of injecting the connected routes into a VRF, and that's it?

Thanks again

Nuno Ferreira

Hello Nuno,

Performance is not decreased by mixing the P and PE roles, and it is possible to have a working MPLS network without any pure P nodes in the inner core.

For example, years ago we built a network for a mobile phone provider where both GSR 12000s and the old and glorious C7500s were PE nodes.

At the beginning the GSRs were P nodes, but we faced performance issues and had to move some of the VRFs with higher traffic volume to the GSRs.

About point 2:

If you put the server farms in a VRF, whatever PE nodes you use, you then need multiple route targets so that only the intended users can access them.

You may want to split servers that need/provide Internet access from servers for internal use only.

So you need to accept some complexity to achieve the desired connectivity model.
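
For illustration, a minimal sketch of the route-target arrangement on the server farm PE, assuming AS 65000 and hypothetical VRF names and RT values:

ip vrf SERVERFARM
 rd 65000:500
 route-target export 65000:500   ! clients that should reach the farm import this RT
 route-target import 65000:500
 route-target import 65000:101   ! RT exported by client A's VRF
 route-target import 65000:102   ! RT exported by client B's VRF

Each client VRF would in turn import 65000:500, so only the intended users learn routes to the farm.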

Other possible options: you may want to protect the servers with service modules such as the FWSM or ACE, which adds further complexity.

The FWSM can be handy for interconnecting the servers with the global routing table.

If you look at Cisco's enterprise campus design model, it suggests having two distribution switches dedicated to the server farm.

www.cisco.com/go/srnd

So putting the server farm in one or more VRFs requires an increase in complexity.

But you gain better security and confinement/control, so it can be wise to do it.

Hope to help

Giuseppe

Giuseppe,

Thanks again for your quick reply to my questions :)

My concern about breaking the model also relates to VPLS: I still want to be able to deliver VPLS connectivity, but I guess making the core switches PE routers (or partial PEs) will not break this functionality.

Also, regarding the modules you mention, I already have FWSM blades in each of the Internet-facing 7600s and ACE blades in the 6509 switches (P switches/routers), so I don't have any issues there.

I understand what you say about having distribution switches for the server farms, but I won't get budget for two more 6500s or anything else apart from the access switches. That's why I was considering this collapsed-core approach, where the current 6509s (P routers, eventually PEs in the future) perform both the core and distribution functions.

If I make the 6509s (P switches/routers) PEs as well, I will have to run MP-BGP on them (they only run the IGP at the moment) and may eventually end up with loads of BGP routes ..

What do you think ?

Thanks again

Nuno Ferreira

Hello Nuno,

>> I still want to be able to deliver VPLS connectivity, but I guess making the core switches PE routers (or partial PEs) will not break this functionality.

Your understanding is correct: VPLS will not be broken.

>> I won't get budget for two more 6500s

Reasonable.

>> I will have to run MP-BGP on them (they only run the IGP at the moment) and may eventually end up with loads of BGP routes ..

You can control how many routes are learned by the nodes: for example, you can configure a per-VRF maximum number of prefixes.

Internet access for an MPLS VPN can be provided without giving it the full table, using just a default route towards the Internet gateway; a minimal sketch of both ideas follows.
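
A minimal sketch on a PE, with a hypothetical VRF name, AS number and next hop:

ip vrf SERVERFARM
 rd 65000:500
 maximum routes 1000 warning-only   ! warn when more than 1000 prefixes land in the VRF
!
! Internet access from the VRF via a default route pointing into the global table
ip route vrf SERVERFARM 0.0.0.0 0.0.0.0 192.0.2.1 global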

Hope to help

Giuseppe

Giuseppe,

Thanks again.

You are right regarding the Internet routes, as I keep them only on my 7600s ... then I create a VRF but do not inject the full Internet table into it ... I just insert a default route.

So, to summarize: you think the best option to deliver what I want is the following:

- Run MP-BGP on the P switches and make them partial PEs

- Use them as the default gateways of the server farms and put the connected routes into the VRFs (see the sketch after this list)

- Export/import the VRF routes to/from the clients who need access to these server farms

- For server farms with Internet access, place them in DMZs protected by the FWSM blades on the 7600s
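
For the gateway/VRF piece (second item), a minimal sketch on one of the 6509s, with hypothetical VLAN, AS and addressing:

interface Vlan500
 description Server farm default gateway
 ip vrf forwarding SERVERFARM
 ip address 10.5.0.1 255.255.255.0
!
router bgp 65000
 address-family ipv4 vrf SERVERFARM
  redistribute connected   ! inject the directly connected farm subnets into MP-BGP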

Just another question, as I may want/need to have the server farms firewalled from customers as well ... is it possible to have the traffic from the clients routed via the 7600's FWSM blade, instead of going directly to the 6500s, without adding delay or latency?

Thanks

Nuno Ferreira

Hello Nuno,

>> I may want/need to have the server farms firewalled from customers as well ... is it possible to have the traffic from the clients routed via the 7600's FWSM blade, instead of going directly to the 6500s, without adding delay or latency?

This is possible:

On the C7600 you need two VRFs and an FWSM routed context.

The game with multiple RTs is played on the "outside" VRF, which redistributes the static routes for the server farm IP subnets.

The inside VRF, using other RTs, communicates with the VRFs on the core switches where the server farms are placed/connected.

The FWSM routed context needs static routes as well. A minimal sketch of the VRF side follows.
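
A sketch of the two VRFs on the C7600 (RTs, subnets and the FWSM outside-interface address are all hypothetical; the routed context itself is configured in the FWSM's own CLI):

ip vrf FARM-OUTSIDE
 rd 65000:600
 route-target export 65000:600      ! client VRFs import this RT
!
ip vrf FARM-INSIDE
 rd 65000:601
 route-target both 65000:601        ! matches the server farm VRFs on the core switches
!
! Outside VRF: static routes for the farm subnets via the FWSM outside interface,
! redistributed into MP-BGP so clients send their traffic through the firewall
ip route vrf FARM-OUTSIDE 10.5.0.0 255.255.255.0 10.99.0.2
router bgp 65000
 address-family ipv4 vrf FARM-OUTSIDE
  redistribute static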

Note:

Some additional delay is unavoidable.

Hope to help

Giuseppe

Giuseppe,

I think I understand what you are saying: even though the server farms are connected to the core switches and those will have PE functions, I would only distribute the routes on the 7600s ... This way the clients would see the servers coming from PE1 and PE2 and send the traffic there, where I can make it go through the FWSM as you explain above ..

I'm attaching the design I have produced from everything we have been discussing, for your analysis ..

Thanks again

Nuno Ferreira

Hello Nuno,

Your understanding is correct about how to put the FWSM in the data path.

It is a nice diagram that represents your scenario.

note:

Since the new cross links are only 1GE, you should manipulate the IGP metric to reflect this. (Probably not necessary with EIGRP and OSPF, which derive their metrics from interface bandwidth, but I mention it for completeness.)

Hope to help

Giuseppe

Giuseppe,

I was thinking: since I will have to have the server farms firewalled and advertised only on the 7600s, as we discussed previously, I may not need to add the PE functionality to the P switches, right?

I can route the traffic to the 7600s via the IGP and then create the VRFs/VPNs for the server farms there, right?

Thanks for your help

Nuno Ferreira

Hello Nuno,

>> I can route the traffic to the 7600s via the IGP and then create the VRFs/VPNs for the server farms there, right?

I don't think so: if the server farms are in the GRT (global routing table), you cannot be sure the FWSM is not bypassed by traffic going directly to the C6500 core switches.

As a low-profile alternative, you can use the core switches only as L2 switches and carry the server VLANs to the C7600 over an L2 trunk link: this way the L3 entry/exit point is on the C7600. A sketch follows.
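
A rough sketch of this alternative, with hypothetical VLANs and addressing:

! Core 6500: pure L2 for the farm VLANs, trunked up to the C7600
interface GigabitEthernet1/1
 description Trunk to RTR1-PE1
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 500,501
 switchport mode trunk
!
! C7600: the L3 entry/exit point for the farm sits here, inside the VRF
! (and behind the FWSM context as discussed above)
interface Vlan500
 ip vrf forwarding FARM-INSIDE
 ip address 10.5.0.1 255.255.255.0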

Hope to help

Giuseppe

Giuseppe,

Just another question that I think you may be able to answer quite quickly :)

Take my design for reference: all my 7600s have ES20 Ethernet cards in them for VPLS.

Do I need a VPLS-capable card (SIP-400/600) in the 6500s to allow VPLS from client A to client B (connected to different PE routers)?

I think for VPLS we just need VPLS-capable cards on the PE device's interface that connects to the core ... right? If that's the case, then I can connect my PEs' ES20 cards to any GE Ethernet-based card in the 6500s and still be able to do VPLS ... Right?

Thanks

Nuno Ferreira

Hello Nuno,

I agree: for the P role you just need an MPLS-switching-capable GE interface.

The VPLS-capable interface/linecard is needed only on the PE nodes.

This is another advantage of MPLS.
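
For concreteness, a minimal sketch of the PE-side VPLS configuration (VFI name, VPN ID and peer loopback are hypothetical); nothing VPLS-specific is configured on the P switches:

l2 vfi CUSTOMER-A manual
 vpn id 100
 neighbor 10.255.0.2 encapsulation mpls   ! loopback of the remote PE
!
interface Vlan100
 xconnect vfi CUSTOMER-A                  ! attach the customer VLAN to the VFI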

Hope to help

Giuseppe

Cool .. My thoughts exactly ..

Thank you very much for your help, Giuseppe.
