Query regarding design and VSS

darren-carr
Level 2

Hi all,

I am currently working on a campus redesign for a large site for one of our businesses. The design I am working on is made up of an access layer (PCs, servers, etc.), an aggregation layer to provide an aggregation point in each of the large buildings/factories for the edge switches to connect to, and a collapsed distribution/core layer for the aggregation switches to patch into, avoiding a full mesh and extra complexity should we add any other buildings/factories to the core. I've been doing a bit of reading on VSS and am confident this is the way forward for the core switches. I was hoping to run Layer 3 routed ports from the core to the aggregation switches, but I need to span a couple of Vlans over the campus (Management/Wi-Fi AP Management).

Currently, all users/servers are in Vlan 1, which spans the whole campus. I am looking to isolate each building/factory so each has its own data Vlan (10-40), plus a shared Vlan for Wi-Fi AP management (600), one for corporate Wi-Fi (601) and a management Vlan (999).

I'm just after a bit of feedback from people regarding the design and any potential issues I should consider regarding VSS since this is something relatively new to me and a topic I am not 100% confident with at present.

My design has to be simple and easy to support by our operations team.

I am familiar with the L2 protocols (RPVST, etc.) and trunking. CS01 would be the root bridge for the network (priority 0) for all Vlans.

My plan is to implement VSS and run an L2 EtherChannel/trunk from the core to the aggregation switches, with explicit Vlan definitions on the trunks.
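For what it's worth, a minimal IOS sketch of that plan (the port-channel number, interface numbers and exact Vlan list are placeholders, not taken from the thread; Vlan 10 stands in for one building's data Vlan):

! On the VSS core: make it root bridge for every Vlan
spanning-tree mode rapid-pvst
spanning-tree vlan 1-4094 priority 0
!
! One member link per VSS chassis towards an aggregation switch
interface TenGigabitEthernet1/1/1
 channel-group 10 mode active
!
interface TenGigabitEthernet2/1/1
 channel-group 10 mode active
!
! L2 trunk with the Vlans explicitly defined
interface Port-channel10
 switchport
 switchport mode trunk
 switchport trunk allowed vlan 10,600,601,999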

The cores will be deployed in separate buildings for physical diversity, connected by 2 x 10Gb SMF connections.

The uplinks from the aggregation switches will be 2 x 10Gb. I know this looks like I am massively oversubscribing the cores at 8:1, but we won't see such data rates coming from the aggregation layer; this is about future proofing. In fact, some locations may choose to use just 2 x 1Gb for their uplinks to the cores (they currently have 2 x 100Mb). The whole design is about future proofing and making the network more scalable than it is now.

I welcome any feedback regarding the design and comments.

Thanks

28 Replies

You're welcome.

I was looking at the 16-port model of the 4500X. I am assuming that, for redundancy, I would connect a pair of 4500Xs using two of the 16 ports, leaving me with 14 available ports. I'd also lose a port for the uplink to the collapsed core, leaving a total of 13 ports per switch. Does this sound correct? The reason I ask is that in some of the buildings there could be more than 13 access switches. I'm just considering whether a model with a higher port count would be better.

Yes, the two inter-switch links would utilise normal switchports. As for the uplinks to the core, wouldn't you cross connect each 4500-X to each 4500E? If so, this would require two uplinks per 4500-X, not one.

Sounds like the 32-port model would be prudent. Whilst you can always add the 8-port expansion module, it seems a bit tight already for my liking, and it would cost a lot to upgrade models later.

Also, if I have access switches that aren't stacked, and I connect them redundantly to each of the VSS'd 4500X switches in the aggregation layer in a building, would the 4500X VSS pair simply block one of the ports if RPVST is enabled? Surely it would cause a loop otherwise?

I'm not entirely clear on the topology. Will each access-layer switch be individually uplinked? If so, I presume each access switch will be dual-uplinked with one uplink per 4500-X switch. In this scenario, you can use EtherChannel to bundle both uplinks together and negate the requirement for spanning tree to block one uplink.

If there will be a daisy chain of access-layer switches, with the top-most and bottom-most switches uplinked to the 4500-X pair, then spanning tree will block one of the links to prevent a layer-2 loop. However, this isn't an ideal design. Better to diversely uplink each individual access switch, or form a stack.
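For the first (individually dual-uplinked) scenario, the access-switch side is just an ordinary LACP bundle, since the VSS pair appears as one switch. A minimal sketch, with placeholder 2960-S port numbers and Vlan list:

! One uplink to each 4500-X chassis, bundled into a single port-channel
interface range GigabitEthernet1/0/49 - 50
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode trunk
 switchport trunk allowed vlan 10,600,601,999

With both uplinks in one bundle, neither is blocked by spanning tree and both forward traffic.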

Hi Shillings...

I see, so from the 4500X would you have a single, logical port-channel made up of the 4 uplinks given that the 4500X and 4500E VSS present a single, virtual switch to the network?

Regarding the topology, we mostly have the Cisco 3560 deployed in the access layer, running layer 2 only. Some are connected by a GigaStack cable, but aren't really a stack as we know it today. I'm not entirely sure how many of these switches are staying or going at the site, but they will need connecting to the 4500X that may be deployed in each building. My plan is to provide connectivity from each access switch to each of the 4500Xs; I want to do away with the daisy-chaining of switches.

We are fortunate in that we do have a number of 2960S stacks in the key areas. These will be re-used in the new design. My plan is to also connect these stacks to both 4500X switches, and as you suggested, use a port-channel to do this. I'm doing a walk around of the site next week to get a full appreciation of the switches and fibre connectivity that is currently deployed.

Thanks again for the information.

I see, so from the 4500X would you have a single, logical port-channel made up of the 4 uplinks given that the 4500X and 4500E VSS present a single, virtual switch to the network?

Yes, the port-channel, comprising 4 physical uplinks, is seen as a single logical connection between the two layers.

The cross connectivity offers the best protection from a single uplink or port failure. It's not such a big deal in this case, but if you had an active/standby pair of ASA firewalls, then helping prevent failover to the standby ASA can be quite important, especially during production hours. I've counted up to a dozen cable/port failures that this design can help protect against.

The previously mentioned white paper nicely illustrates connectivity between core and distribution layers, albeit doubled up on uplinks. See the top diagram:

http://www.cisco.com/en/US/prod/collateral/switches/ps10902/ps12332/white_paper_c11-696802.html

Great. Understood. All that makes sense. Thanks for the whitepaper as well.

One other query I have regards the VSL between the two cores. As we have Sup-7Es in each of the cores, I was planning on using the 10Gb uplinks on the Sup-7E to connect the two cores. In addition, I was planning on installing 2 x WS-X4712-SFP+E 10Gb line cards (I will have to upgrade the IOS in the cores to support these). As well as the 10Gb connections on the Sup-7Es, I was thinking of also using one of the 10Gb ports on each of the 10Gb line cards for the VSL, therefore providing a 40Gb VSL. The core will be deployed in two geographically dispersed rooms, with a distance of around 1km separating them. The SFPs I plan to use should be the same in both the Sup-7E and the line cards. If, however, there was a slight difference (I am just speculating here), does the switch have the intelligence to detect this and refuse to form the channel with the differing SFPs, or does it just see them as 10Gb interfaces and form the channel anyway? The interface settings, speed and type will all be the same. I'm thinking I will use this approach to provide maximum resiliency.
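For reference, a hedged sketch of the VSL side of that on the first chassis, before conversion (domain number, port-channel number and slot/port numbers are all placeholders; check the VSS configuration guide for your IOS-XE version):

switch virtual domain 100
 switch 1
!
! VSL bundle: both Sup-7E 10Gb uplinks plus one port on each WS-X4712-SFP+E card
interface Port-channel63
 switchport
 switch virtual link 1
!
interface range TenGigabitEthernet3/1 - 2
 channel-group 63 mode on
!
interface TenGigabitEthernet5/1
 channel-group 63 mode on
!
interface TenGigabitEthernet6/1
 channel-group 63 mode on
!
! Repeat on the second chassis with 'switch 2' and 'switch virtual link 2',
! then convert both with: switch convert mode virtual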

Does this seem reasonable?

Thanks again.

Yes, for best redundancy, your plan is the way to go. Somewhere on the Cisco website is a document detailing just such connectivity; I've had a look but can't find it, much to my frustration. Maybe it was more aimed at Quad-Sup2T, but using non-Supervisor links for the VSL is certainly recommended, in case you lose a Supervisor.

As for the differing fibre modules, I am not 100% sure, so would rather not presume it is OK, even if I can't see an issue in theory.

I've never worked on any projects or proposals with Supervisors on different sites either, but seem to recall posts on the forum confirming it should be OK. Might be worth a search, if no one responds on this topic.

We use Sup2T across different data centres in VSS mode. Not quite 1km, but still in different buildings. It works well and provides physically diverse data centres and corresponding building links. All fibre, but that's on the 6509E chassis.

Thanks. I think I am going to dig a bit deeper on the Sup-7Es just to be sure. It may well be less than 1km; I'm just not sure what the total length of the fibre run between the cores will be, as I don't yet know what path it will take.

Thanks for the heads up though

darren-carr
Level 2

Here is my final topology diagram; please excuse the Site D port-channel, it went a bit funny in Visio.

I'm now happy with the design. Would you recommend anything else? Is there anything you would change? Happy to take any feedback onboard.

Nice diagram. Can't see any issues myself.

What about IP addressing? Will you assign dedicated subnets per site? Best to keep each subnet down to a /24. It's tempting to have a management VLAN span the entire campus for ease of use, but do you really want to do that?

Presume you will ensure the core is just for routing and doesn't perform any QoS classification or the like. On that subject, if the distribution switches perform QoS classification, then this is much harder with the 3750-X series. The 3850 and 4500-X series use router-style MQC, which is easier to understand and much less restrictive. Legacy 3750 'Switch QoS' (aka 'LAN QoS') will take you a day to get your head around if you really want to understand it and not rely on AutoQoS. Obviously, the preferred approach is to classify as close to the edge as possible - i.e. your access-layer switches.
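To illustrate, classification on the 4500-X or 3850 is plain MQC - class-maps and policy-maps - as in this minimal sketch (the class name, match criterion and markings are illustrative only, not a recommendation):

class-map match-any VOICE
 match cos 5
!
policy-map EDGE-IN
 class VOICE
  set dscp ef
 class class-default
  set dscp default
!
interface TenGigabitEthernet1/1
 service-policy input EDGE-IN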

Hi, sorry for the late reply. I was away on holiday for the weekend.

IP addressing... good question. Currently, and this is one of the reasons for this work, the site uses a single Vlan for all users, local servers, etc. - Vlan 1. This is obviously something I want to address in the new design. My plan is to allocate a separate data Vlan to each site. My initial plan was to have a single Vlan for management, but as you point out, this would mean spanning that Vlan across the campus - not good practice, I know. The management Vlan would only be configured on the SVIs of the switches; no workstations, etc. I am now thinking, however, that a better practice would be to subnet the assigned prefix further between the sites. My only concern with this is the size of the subnet I have been allocated and how scalable this is. This is something I am working on.

The other issue I have is that we have a corporate SSID that needs to be accessible campus-wide and support roaming. My initial thought was that another Vlan would have to span the campus, but I am pretty sure if I dig a bit deeper there is an alternative way of doing this.
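By way of illustration, on the addressing point above: if each building's Vlans terminate on its own distribution switch (as ends up being the plan later in the thread), the per-site SVIs might look like this - addresses entirely made up, assuming a /24 per Vlan carved from the site allocation:

! Building A distribution (4500-X pair)
interface Vlan10
 description Building A data
 ip address 10.1.10.1 255.255.255.0
!
interface Vlan999
 description Building A switch management
 ip address 10.1.99.1 255.255.255.0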

We don't currently have any QoS deployed at the site, but that isn't to say it's something we won't implement in the future. I take onboard your comments regarding the 3850s; I plan to read up on these switches today.

The other issue I have is that we have a corporate SSID that needs to be accessible campus-wide and support roaming. My initial thought was that another Vlan would have to span the campus, but I am pretty sure if I dig a bit deeper there is an alternative way of doing this.

This is called layer-3 roaming. Both Cisco Aironet and Meraki can do this; Aironet creates a tunnel back to the original controller. I've not set it up on Cisco Meraki yet. If you have Aironet, then bear in mind that the 3850 series switch includes an integrated WLAN controller that is license activated.

Meraki is brilliant, by the way. It's so much easier to set up and manage, compared to anything else I've used.

Ok, so I found a bit of information on AP groups on the Cisco website. Unfortunately, we aren't using Cisco wireless at the site, but I now have a good idea of what is required from the vendor who is supplying it. This means I should be able to terminate the Vlans on the local site distribution switch and consider routing from the distribution to the core, as no Vlans will span the entire campus.

http://www.cisco.com/en/US/tech/tk722/tk809/technologies_configuration_example09186a008073c723.shtml

I'm getting a bit of push back regarding VSS after all this work; the ops guys aren't keen, so I may have to come up with an alternative solution for now. It's just a shame, as you pointed out in an earlier reply, that the 3850s don't support a full SFP module. I am now going to have to consider the 3750-X or the 4500-X.

The vendor is Aruba, so I need to look into the capabilities of their solution. The roaming seems fairly standard, and for an enterprise-class provider like Aruba I'd expect them to have something similar.

The only issue I can think of now is that the two controllers would be located with the two cores in geographically dispersed rooms. If the design relied on Vlan association with an interface configured on the controller, and that Vlan wasn't defined on the core switch, I'm not sure how this would work (the Vlans would be configured and routed on each building's distribution switch).

As I say, I think I need to speak to Aruba regarding this to get a better understanding of their capabilities.

Going back to the switching/routing topology: we use OSPF to provide an interconnection between our core switches and our WAN provider. We facilitate this by providing a dedicated L2 Vlan for the interconnection, and on each core switch we configure an SVI (same number as the dedicated Vlan), which we use to establish OSPF adjacency with the WAN provider. We trunk the Vlan between the two core switches to provide failover from the primary circuit to the secondary circuit. Only one router is forwarding at any time: a default route is advertised by both WAN routers, but with the OSPF cost adjusted (increased) for the route advertised by the backup router.
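For context, the cost-adjusted default route described above typically comes from something like this on the two WAN routers (process ID and metrics illustrative):

! Primary WAN router - lower metric is preferred
router ospf 1
 default-information originate metric 10
!
! Backup WAN router - only used if the primary's default disappears
router ospf 1
 default-information originate metric 100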

I was thinking, to avoid spanning Vlans over the campus, and assuming the Wi-Fi provider supports this, of making the port-channels between the core (4506) and aggregation (4500-X) switches Layer 3 port-channels. Each port-channel would be configured with a /30 subnet. OSPF would then be enabled on each port-channel, and an OSPF adjacency formed with each aggregation switch across it. The networks in each part of the campus would then be advertised by each aggregation switch through OSPF. The OSPF network type for the connection would be point-to-point, so each aggregation switch would establish only a single adjacency.
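A minimal sketch of one such routed port-channel on the core side (subnet, process ID and all numbering are placeholders):

! Member links on both chassis must be routed ports before bundling
interface range TenGigabitEthernet1/1/2 , TenGigabitEthernet2/1/2
 no switchport
 channel-group 20 mode active
!
interface Port-channel20
 no switchport
 ip address 10.0.0.1 255.255.255.252
 ip ospf network point-to-point
!
router ospf 1
 network 10.0.0.0 0.0.0.3 area 0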

Does this sound reasonable?

My only concern is the Wi-Fi. If a Wi-Fi interface is configured on the aggregation switch, and the Wi-Fi controllers are connected to the cores, then how would the Wi-Fi work? Would I have to span the L2 Vlan for the Wi-Fi across the campus in that case? I appreciate you may not know Aruba that well; I am just questioning the theory.
