Routed Access Layer Design Question in Data Center

nsurginer
Level 1

I'm working on a network build-out for a small data center that's primarily a colocation provider. I'm looking to implement a layer 3 routed campus-style design, but I have some questions about the access layer that I can't find answers to.

I'm using dual 6506s with Sup720s as the core, dual 4507s with Sup Vs at the distribution layer, and dual 4506s with Sup IVs at the access layer.

I want to be able to give all colocation clients A/B Internet drops from two 4506s, but since it's layer 3 at the 4506s and they will be stubs (as in not connected to each other), I'm not sure how to handle the A/B drops without using something like HSRP and VLANs between the switches for redundancy. I'd like to know specifically what options I have with this model for offering redundancy to the client, in the form of a seamless failover between the 4506s if I take an access switch down, there is a failure, etc.

Thanks...

8 Replies

Collin Clark
VIP Alumni

Having two drops at the desk can simplify the design. At the access layer I would use 6500s and run VSS. At the PC level I would add a second NIC and run MEC (Multichassis EtherChannel) to the 6500s. Using EtherChannel you can remove a link or a switch without bringing down the client. If you're forced into the 4500s at the access layer, then you will need to share the VLAN between them and use a redundant gateway protocol (HSRP/VRRP/GLBP).
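
As a rough sketch only (the port, VLAN and channel numbers here are made up), the MEC on the VSS pair for a dual-homed client could look something like this:

! one member port on each VSS chassis (1/x/x and 2/x/x), bundled with LACP
interface range GigabitEthernet1/2/1 , GigabitEthernet2/2/1
 switchport
 switchport mode access
 switchport access vlan 100
 channel-group 10 mode active
!
interface Port-channel10
 switchport
 switchport mode access
 switchport access vlan 100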

Hope it helps.

Thanks for the reply.  VSS is nice, but it's a bit out of the question right now.


Here's a better idea of what I'm trying to achieve:

^     ^  (to core)
| /  \ |
0----0  (distribution layer)
| \  / |
|  / \ |
O    O (Access Layer)
\    /
  \  /
   o  (CPE in rack - Switch or redundant CPE switches)

So traditionally the access layer would be layer 2, and you'd run HSRP between the distribution switches to provide a virtual gateway IP.

But with the layer 3 design the VLANs are isolated to each access switch, and I'm not sure how to provide failover and a single virtual gateway IP for clients to configure on their routers/firewalls, etc.

nsurginer wrote:

Thanks for the reply.  VSS is nice, but it's a bit out of the question right now.


Here's a better idea of what I'm trying to achieve:

^     ^  (to core)
| /  \ |
0----0  (distribution layer)
| \  / |
|  / \ |
O    O (Access Layer)
\    /
  \  /
   o  (CPE in rack - Switch or redundant CPE switches)

So traditionally the access layer would be layer 2, and you'd run HSRP between the distribution switches to provide a virtual gateway IP.

But with the layer 3 design the VLANs are isolated to each access switch, and I'm not sure how to provide failover and a single virtual gateway IP for clients to configure on their routers/firewalls, etc.

That was one of the points I was getting at. If the end devices are PCs then you don't need HSRP, simply because if the L3 switch fails then so does the PC's connectivity. But if you are talking about servers that you dual-home, then see my other post.

Jon

Jon Marshall
Hall of Fame

nsurginer wrote:

I'm working on a network build-out for a small data center that's primarily a colocation provider. I'm looking to implement a layer 3 routed campus-style design, but I have some questions about the access layer that I can't find answers to.

I'm using dual 6506s with Sup720s as the core, dual 4507s with Sup Vs at the distribution layer, and dual 4506s with Sup IVs at the access layer.

I want to be able to give all colocation clients A/B Internet drops from two 4506s, but since it's layer 3 at the 4506s and they will be stubs (as in not connected to each other), I'm not sure how to handle the A/B drops without using something like HSRP and VLANs between the switches for redundancy. I'd like to know specifically what options I have with this model for offering redundancy to the client, in the form of a seamless failover between the 4506s if I take an access switch down, there is a failure, etc.

Thanks...

It's not clear what you mean by the access layer in the data centre. Do you mean for servers etc.?

If so, it is not a very good idea to use L3 at the access layer. Whilst a routed access layer works well in a campus environment, it really isn't suited to a DC design for three primary reasons:

1) Lack of VLAN flexibility. A routed access layer confines the scope of a VLAN to one access switch or a pair of access switches. In a DC you want maximum flexibility in terms of being able to move servers around, etc. If, for example, you need to move a server from one rack to another or to another part of the DC, with a routed design you would need to readdress the server, which is just not practical.

2) Deployment of services such as firewalls/load balancers. If you deploy a routed access layer you cannot use transparent mode for these services between the distribution layer and the access layer, which can be very useful.

3) NIC teaming requires L2 adjacency between the 2 switches you have connected the server to.

Within a DC you want to be able to easily deploy and relocate services, and L3 in the access layer will trap you into a design that may not scale to what you need it to. RSTP failover is not as quick as L3 failover, but it's not far off. If you read most of the DC design docs, Cisco still recommends L2 in the access layer in DCs, and I have found from my experience that it is a much better solution.
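
Just for reference, enabling rapid spanning tree in the L2 access/distribution blocks is minimal config. A sketch (the VLAN numbers are only examples):

! on every switch in the L2 domain
spanning-tree mode rapid-pvst
!
! on the primary distribution switch, pin the root bridge for the server VLANs
spanning-tree vlan 10,20 root primary
!
! and on the second distribution switch
spanning-tree vlan 10,20 root secondary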

Note that with the advent of VSS (6500) and vPC (Nexus) you can run L2 with STP only providing a fail-safe rather than being an active participant in the L2 topology, but if you are talking about a small DC they are probably not relevant at the moment.

If I have misunderstood and you are not talking about an L3 access layer within the DC, then by all means ignore all of the above.

Jon

nsurginer
Level 1

Jon,

Thanks for the reply.

Servers and PCs won't be connecting directly to the 4506s; they are simply the connection point for CPE equipment in the clients' racks, be it switches, firewalls, etc. on their end. My hang-up is that, as far as I know, I have no way of providing them with a redundant virtual gateway IP (like HSRP) that goes back to both access switches in the routed access layer. That's really what it boils down to, and it's the main question I want answered to know whether this design will work or whether I should stick with a layer 2 access layer. So: how do I provide a virtual IP at the access layer to the CPE in this design?

As for our DC support infrastructure and our own server clusters, they will be connecting to separate internal 4506s that will be layer 2 (with HSRP) and route up to the distribution layer via failover firewalls. The core will also be running EIGRP, plus BGP to multihome to our ISPs.
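
Roughly speaking (the AS numbers, interfaces and prefixes below are placeholders, not the real ones), the core routing will be along these lines:

router eigrp 100
 network 10.0.0.0 0.255.255.255
 passive-interface default
 no passive-interface GigabitEthernet1/1
 no passive-interface GigabitEthernet1/2
!
router bgp 65000
 ! ISP A
 neighbor 192.0.2.1 remote-as 64500
 ! ISP B
 neighbor 198.51.100.1 remote-as 64510
 ! advertise our public block to both upstreams
 network 203.0.113.0 mask 255.255.255.0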

nsurginer wrote:

Jon,

Thanks for the reply.

Servers and PCs won't be connecting directly to the 4506s; they are simply the connection point for CPE equipment in the clients' racks, be it switches, firewalls, etc. on their end. My hang-up is that, as far as I know, I have no way of providing them with a redundant virtual gateway IP (like HSRP) that goes back to both access switches in the routed access layer. That's really what it boils down to, and it's the main question I want answered to know whether this design will work or whether I should stick with a layer 2 access layer. So: how do I provide a virtual IP at the access layer to the CPE in this design?

As for our DC support infrastructure and our own server clusters, they will be connecting to separate internal 4506s that will be layer 2 (with HSRP) and route up to the distribution layer via failover firewalls. The core will also be running EIGRP, plus BGP to multihome to our ISPs.

If you need VIPs for the CPE equipment, then you could connect the two 4506 switches together with an L2 trunk between them and still connect them via L3 to the distro switches. This would at least constrain the L2 broadcast domains for those VLANs to the 4506 switches themselves.
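
Purely as a sketch (the VLANs, ports and addressing are made-up examples), each 4506 would have something like:

! L2 trunk between the two 4506s carrying only the customer VLANs
interface GigabitEthernet2/1
 description trunk to second 4506
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 100,101
!
! routed uplink to a distro switch - the customer VLANs are not carried here
interface GigabitEthernet1/1
 no switchport
 ip address 10.1.1.1 255.255.255.252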

The other alternative is simply to replace the 4506 switches with a 4507R or 4510R running dual supervisors and not bother with HSRP etc. Obviously this only gives you supervisor redundancy, not physical chassis redundancy.

Jon

That's what I was looking for! Basically I can trunk between the 4506s and then use SVIs to create an HSRP group to get the VIPs needed. Do you see any pitfalls to doing this? Although I'm okay with having a single chassis with dual sups, some of our clients may not be, so I'd like to stay away from that even though it's an easier and cleaner route.

nsurginer wrote:

That's what I was looking for! Basically I can trunk between the 4506s and then use SVIs to create an HSRP group to get the VIPs needed. Do you see any pitfalls to doing this? Although I'm okay with having a single chassis with dual sups, some of our clients may not be, so I'd like to stay away from that even though it's an easier and cleaner route.

No pitfalls as such. You are obviously running STP on those 4500 switches with an L2 trunk, which with a true routed access layer you wouldn't be, or at least you wouldn't be using L2 trunks. But like I say, these VLANs are only going to exist on the 4506 switches, and the uplinks to the distro layer will be L3 routed connections, so you are not extending L2 from the access layer to the distro layer.
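
For completeness, a minimal sketch of one customer SVI pair with HSRP (the VLAN, group number and addresses are just examples):

! 4506-A
interface Vlan100
 description customer A gateway
 ip address 192.168.100.2 255.255.255.0
 standby 100 ip 192.168.100.1
 standby 100 priority 110
 standby 100 preempt
!
! 4506-B
interface Vlan100
 description customer A gateway
 ip address 192.168.100.3 255.255.255.0
 standby 100 ip 192.168.100.1
 standby 100 preempt

The customer's CPE then points its default gateway at 192.168.100.1 (the HSRP VIP), which stays reachable if either 4506 goes down.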

Generally speaking, firewalls, routers connecting to the WAN, etc. would be connected to the distro layer, or more specifically the aggregation layer in a DC, and if you need to provide segregation between customers you can use firewalls/VRFs etc. to keep their traffic separate. However, this doesn't mean there is anything wrong with what you are doing, and without knowing a lot more about the design in terms of the aggregation layer's specific purposes, what you are proposing is fine.

Jon
