Rapid Spanning-Tree design confirmation

Feb 26th, 2008

I am running RSTP and want to confirm that this design will give the desired results. I want all VLAN traffic to flow from the DCU switches (6500s) to the core (6500s), with the exception of VLANs 10, 11, 19, 20, and 30-32. I want those VLANs to flow across the 20 Gb trunk between the two DCU switches rather than going through the core first. I think I have a working plan here, but would appreciate any suggestions or feedback. (The two core switches are currently the root bridges for all VLANs.)


Core 1

interface TenGigabitEthernet9/3
 description ===to DCU-01 port 8/1===
 no ip address
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
==============================================


Core 2

interface TenGigabitEthernet9/3
 description ===to DCU-02 port 8/1===
 no ip address
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
===============================================


DCU-01

interface TenGigabitEthernet8/1
 description ===to core 1 port 9/3===
 no ip address
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 7,9-11,19,20,30-32,70,1002-1005
 switchport mode trunk
 spanning-tree vlan 10,20,30,32 cost 18
!
interface TenGigabitEthernet8/2
 no ip address
 shutdown
!
interface TenGigabitEthernet8/3
 description ===to DCU-ASW-02 port 8/3===
 no ip address
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 7,9-11,19,20,30-32,70
 switchport mode trunk
!
interface TenGigabitEthernet8/4
 description ===to DCU-ASW-02 port 8/4===
 no ip address
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 7,9-11,19,20,30-32,70
 switchport mode trunk
================================================


DCU-02

interface TenGigabitEthernet8/1
 no ip address
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 7,9-11,19,20,30-32,70,1002-1005
 switchport mode trunk
 spanning-tree vlan 11,19,31 cost 18
!
interface TenGigabitEthernet8/2
 no ip address
 shutdown
!
interface TenGigabitEthernet8/3
 description ===to DCU-ASW-01 port 8/3===
 no ip address
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 7,9-11,19,20,30-32,70
 switchport mode trunk
!
interface TenGigabitEthernet8/4
 description ===to DCU-ASW-01 port 8/4===
 no ip address
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 7,9-11,19,20,30-32,70
 switchport mode trunk




lamav Tue, 02/26/2008 - 10:51

Hi:


I'm assuming that the DCU switches are L2 access switches and that the connection between the core switches is L3. If that's the case, what you have is a loop-free U topology for the access layer, which means:


a.) RSTP will not block any ports (see note 1 below), and all uplinks and inter-switch links will be forwarding traffic. So you will have two active uplinks, with the inter-switch link as a backup in the event that one of the uplinks fails; the backup will already be in the forwarding state.


b.) The VLANs cannot be spanned across the data center, only across the access layer. An inverted-U topology would allow spanning across the data center. This may or may not be an issue for you.


c.) Service modules in the routed layer may experience black-holing of traffic. If you will not be deploying service modules, don't worry about it.
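As a sanity check once this is in place, you can confirm the port roles and states that point (a) predicts from the CLI (VLAN 10 is used here just as an example):

show spanning-tree vlan 10
show spanning-tree interface TenGigabitEthernet8/1 detail

If the topology really is loop-free, every trunk should show a Root or Designated role in the FWD state, with nothing in BLK.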



Back to your design...


1. One of those 10 Gb trunks between switches DCU-01 and DCU-02 is going to be blocked by STP. Why don't you form an EtherChannel to avoid this?
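A minimal sketch of that EtherChannel, mirrored on both DCU switches (the channel-group number is arbitrary; verify LACP support on your supervisor before using mode active, or use "mode on" for a static channel):

interface range TenGigabitEthernet8/3 - 4
 channel-group 10 mode active
!
interface Port-channel10
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 7,9-11,19,20,30-32,70
 switchport mode trunk

STP then sees one logical 20 Gb link instead of two parallel 10 Gb links, so neither physical member gets blocked.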


2. On the 10G uplinks between the DCU and core switches, why haven't you explicitly allowed the VLANs you want on the core side as well?
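For example, pruning the core end of the trunk to match the access end would look like this (shown for Core 1; the VLAN list simply mirrors what DCU-01 already allows):

interface TenGigabitEthernet9/3
 switchport trunk allowed vlan 7,9-11,19,20,30-32,70,1002-1005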


3. You say you only want to allow VLANs 10, 11, 19, 20, and 30-32 on the inter-switch links between the DCUs, but you are also allowing VLANs 7 and 9. Why?


4. What are you trying to achieve by forcing traffic from certain VLANs to use the 20 Gb connection instead of traversing the uplink to the core? In any case, you are allowing the same VLANs across both the uplinks and the inter-switch links.


Maybe a little more background on what you hope to achieve with this design is in order.


HTH


Victor

mcroberts Tue, 02/26/2008 - 11:37

Thank you for your response. You are correct, the DCU switches are layer 2 and the core switches are layer 3.

1. I could create an EtherChannel, but would still want the traffic to flow as described above.

2. The only reason I leave the core open to all VLANs is that it makes the configuration easier to manage. I leave the core open and lock down each access switch to permit only the required VLANs. If a new VLAN needs to be added to a switch, I know I only need to apply the change to the access switch rather than to the access switch and both core switches.

3. I want VLANs 10, 11, 19, 20, and 30-32 to flow between the DCU switches because I have server farms sitting in those VLANs behind the DCU switches. VLANs 7 and 9 are used only for management and security, so their traffic only needs to flow back toward the core.

4. The only reason I want the aforementioned VLANs to traverse the 20 Gb trunk is to reduce the number of hops on the network. This way, a server in VLAN 10 behind DCU-01 can traverse the 20 Gb trunk and talk to another VLAN 10 server behind DCU-02 without going through the uplinks to the core.

lamav Tue, 02/26/2008 - 11:59

Hi:


"1. I could create an etherchannel, but would still want the traffic to flow per above."


1. If you want 20 Gb between DCU-01 and DCU-02, you must create the EtherChannel; it's not optional. Just clarifying...


"2. The only reason I allov the core open to all vlans is that it makes configurations easier to manage. I let the core open and lock down each access switch to only permit the required vlans. If a new vlan needs to be added to a switch, I know that I only need apply the change to the access switch vs the access switch and both core switches."


2. I'm not sure I understand you. Do you mean that each access switch carries a separate set of VLANs, i.e., DCU-01 carries VLANs 1-10 and DCU-02 carries VLANs 11-20? I don't think that's what you mean, but I just want to be sure. If so, you will not achieve L2 adjacency for those server VLANs, which will impact network and application high availability. Moreover, if you add a new VLAN at the access layer, you will also have to add it at the core layer. The core switch routes between VLANs, so an L3 routed SVI for that VLAN must be configured on the core -- on both cores, in fact, if you're running HSRP. You will also have to add the VLAN to the STP domain at the core if you want to rig the root bridge election for that VLAN. So, either way, you will have to enter the core switches and do some configuration. It's also not good practice to allow all VLANs on one end of a trunk and prune them on the other.
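To illustrate what adding a new server VLAN to the core would actually involve, here is a sketch for Core 1 (VLAN 40 and the addresses are hypothetical):

vlan 40
!
interface Vlan40
 ip address 10.0.40.2 255.255.255.0
 standby 40 ip 10.0.40.1
 standby 40 priority 110
 standby 40 preempt
!
spanning-tree vlan 40 root primary

Core 2 would mirror this with its own SVI address, a lower HSRP priority, and "spanning-tree vlan 40 root secondary". So the change never stays confined to the access switch.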


"3. I want to allow vlans 10,11,19,20,30-32 flow between the DCU switches because I have server farms sitting within these vlans which reside behind the DCU switches. VLANS 7 and 9 are simply used for mgmt and security, so their traffic only needs to flow back towards the core."


OK, but you are allowing VLANs 7 and 9 also, which is something you say you don't want. That was the point I was making.


"4. The only reason I want the aforementioned vlans to traverse the 20GB trunk is to reduce the number of hops on the network. This way a server on vlan 10 sitting behind DCU-01 can traverse the 20GB trunk and talk to another vlan 10 server sitting behind DCU-02 without going through the uplinks to the core."


OK. So, what you're saying is that VLANs 10, 11, 19, 20, 30, 31, and 32 are server VLANs and you want to allow them on all trunks -- uplinks and the inter-switch link? If so, that's fine.



I will tell you this, though, now that I know this is a server farm:


You are correct in wanting to keep L2 traffic off the core. The problem with the loop-free U topology, though, is that it will not scale well if you decide to deploy service modules, such as the FWSM firewall or the CSM for load balancing. I can get into the reasons for that later, if you want. In that case, the inverted U is better, but it will extend the switched domain to the core.


HTH


If so, please rate my post.


Victor





mcroberts Tue, 02/26/2008 - 12:09

Funny you mention the modules... I do have FWSMs as well as CSM modules on their way. Would you suggest simply allowing the traffic to flow through the core switches in that case?

lamav Tue, 02/26/2008 - 12:27

I see. So, I would:


1. Seriously reconsider the loop-free topology and look into a looped triangle topology (each access switch dual-homed to each core switch). This will provide L2 adjacency for your servers and, by making each routed-layer switch the STP and HSRP primary for the odd- and even-numbered VLANs respectively, load balancing as well. It will also remove the possibility of black-holing your traffic and causing a "split brain" with your service modules.
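A sketch of that odd/even split on the core side (the VLAN groupings and priority values are illustrative only):

Core 1 (primary for the even-numbered VLANs):

spanning-tree vlan 10,20,30,32 root primary
spanning-tree vlan 11,19,31 root secondary
!
interface Vlan10
 standby 10 priority 110
 standby 10 preempt

Core 2 mirrors this with primary/secondary reversed and the raised HSRP priority on the odd-numbered VLANs, so each core carries roughly half the traffic in steady state.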


2. Discuss with your chain of command the possibility of adding a routed distribution layer and getting rid of the collapsed core. That is not absolutely necessary, and of course it is a considerable investment, but it scales much better and lays the foundation for establishing a serious data center with flexibility, scalability, and high availability built in. I don't know how serious your organization is about that, and I also don't know the number of servers in your farm. Furthermore, if you are using this server-farm core to route your campus traffic too, then that would be another reason to get rid of the collapsed core.


Remember, typically a core layer is supposed to do nothing more than HIGH-speed L3/CEF switching between your different network modules (campus, server farm, and edge distribution). The core is not the place to implement policy routing, traffic filtering, or service-module activities such as firewalling, load balancing, or SSL offloading. That is what a distribution layer is for.


HTH


Rate my post if it does.


Thanks


Victor




Mohamed Sobair Tue, 02/26/2008 - 11:50

Hi,


You could achieve this by raising the spanning-tree cost for VLANs 10, 11, 19, 20, and 30-32 on ports 8/1 of both DCU-01 and DCU-02.


Those switches will then take the least-cost path toward the root, which is via the 20 Gb link.
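For example, on DCU-01 (the cost value is illustrative; it just needs to make the direct uplink more expensive than the path through the other DCU switch for those VLANs):

interface TenGigabitEthernet8/1
 spanning-tree vlan 10,11,19,20,30-32 cost 2000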


HTH

Mohamed


