Query regarding design and VSS

darren-carr
Level 2

Hi all,

I am currently working on a campus redesign for a large site for one of our businesses. The design I am working on is made up of an access layer (PCs, servers, etc.), an aggregation layer providing an aggregation point in each of the large buildings/factories for the edge switches to connect to, and a collapsed distribution/core layer for the aggregation switches to patch into, which avoids a full mesh and extra complexity should we add any other buildings/factories. I've been doing a bit of reading on VSS and am confident this is the way forward for the core switches. I was hoping to run Layer 3 routed ports from the core to the aggregation switches, but I need to span a couple of VLANs over the campus (management and Wi-Fi AP management).

Currently, all users/servers are in VLAN 1, which spans the whole campus. I am looking to isolate each building/factory so they have their own data VLAN (10-40), a shared VLAN for Wi-Fi AP management (600), one for corporate Wi-Fi (601), and a management VLAN (999).
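Something like this on the core would capture the plan (the VLAN names are just placeholders I've made up):

vlan 10
 name BLDG-A-DATA
vlan 600
 name WIFI-AP-MGMT
vlan 601
 name WIFI-CORP
vlan 999
 name MGMT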

I'm just after a bit of feedback on the design and any potential issues I should consider with VSS, since it's relatively new to me and a topic I am not 100% confident with at present.

My design has to be simple and easy to support by our operations team.

I am familiar with the L2 protocols (RPVST, etc.) and trunking. CS01 would be the root bridge for the network (priority 0) for all VLANs.

My plan is to implement VSS and run L2 etherchannels/trunks from the core to the aggregation switches, with explicit VLAN definitions on the trunks.
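As a rough sketch of the core side (interface and port-channel numbers are just examples, assuming one member port on each VSS chassis):

! CS01 (VSS) as root bridge for all VLANs
spanning-tree mode rapid-pvst
spanning-tree vlan 1-999 priority 0
!
! multi-chassis etherchannel down to an aggregation stack
interface range TenGigabitEthernet1/1/1, TenGigabitEthernet2/1/1
 channel-group 10 mode active
!
interface Port-channel10
 switchport mode trunk
 switchport trunk allowed vlan 10,600,601,999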

The cores will be deployed in separate buildings for physical diversity, connected by 2 x 10Gb SMF connections.

The uplinks from the aggregation switches will be 2 x 10Gb. I know this looks like I am massively oversubscribing the cores at 8:1, but we won't see such data rates coming from the aggregation layer; this is just future-proofing. In fact, some locations may choose to use just 2 x 1Gb uplinks to the cores (they currently have 2 x 100Mb). The whole design is about future-proofing and making the network more scalable than it is now.

I welcome any feedback regarding the design and comments.

Thanks

28 Replies

JohnTylerPearce
Level 7

Future-proofing the design is never a bad thing. Just don't future-proof it to the point where you're spending 6 million dollars.

Are you planning on using 4500s or 6500s for VSS?

When you're doing an etherchannel, just make sure you check which hashing algorithm you're going to use.

So for instance, if you have two 10Gb links in an etherchannel, you don't have 20Gbps of bandwidth; you have two individual 10Gb links, and the hash decides which link each flow goes on. So depending on the hashing algorithm, one link may be underutilized. Just a quick FYI.
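For example, you can verify and tune it roughly like this (src-dst-ip is just one of the options):

show etherchannel load-balance
!
! hash on source+destination IP instead of the default
configure terminal
 port-channel load-balance src-dst-ip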

Also, are the distribution switches going to be 3750s?

Another thing to remember: whichever switch you decide on, 4500/6500, just make sure the supervisor can support the aggregate bandwidth from each line card, so the chassis is non-blocking.
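As a rough worked example (from memory, so check the data sheets): a Sup7E gives you about 48Gbps per slot on the 4500E, so a 48-port gigabit line card (48Gbps aggregate) is non-blocking, but a 12-port 10Gb card (120Gbps aggregate) would be oversubscribed 2.5:1.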

I would also include two to three uplinks from the access switches to the distribution switches, in an etherchannel as well.

Hi John,

Some good feedback there, thanks.

We are planning to use the existing 4500Es for the VSS. They have only been in for a year and are running Sup7Es. I need to upgrade the IOS to support the new 10Gb line cards. I can't justify taking these out of the network and replacing them with 6500s. The campus is a reasonable size, but the traffic throughput isn't huge right now. We also get a great discount from Cisco on hardware, so the costs aren't so bad.

I like to call the distribution switches 'aggregation switches' as I'm not really doing anything with them above Layer 2. What I am trying to do is minimise the effort if new buildings need to be integrated into the design: by introducing the core, if a new building does get added I just need uplinks to the core and not a full mesh around the rest of the buildings. I've got considerable experience with the 3750-Xs and think they are a good fit for this purpose. Do you have any thoughts on this?

Each access switch will have two uplinks to its aggregation stack, one to each member in the stack.
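For what it's worth, I'd expect the access-switch side of the bundle to look roughly like this (port numbers are examples; some platforms also want 'switchport trunk encapsulation dot1q' on the trunk first):

interface range GigabitEthernet1/0/49 - 50
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode trunk
 switchport trunk allowed vlan 10,600,601,999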

Thanks

The 3750-X should be OK, as long as the switch fabric works for your bandwidth requirements.

How many ISPs are you going to have? Also what firewall solution are you looking at?

You can also implement an FHRP, such as VRRP, HSRP, GLBP, etc.
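For example, a minimal HSRP setup on a data VLAN would be roughly this on the primary (addresses are placeholders; the peer gets its own real IP and a lower priority):

interface Vlan10
 ip address 10.0.10.2 255.255.255.0
 standby 10 ip 10.0.10.1
 standby 10 priority 110
 standby 10 preempt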

Just some ideas. But from looking at it, the design looks pretty good as a base template to either leave as-is or improve upon.

Pretty simple network, easy to administer; winner winner chicken dinner.

All sounds good to me. Do you already have the 10GbE line cards for your 4500E core switches? If not, then an alternative is to upgrade to the new Supervisor Engine 8, released a few weeks ago. It has 8 integrated non-blocking 10GbE ports. Probably an expensive alternative, but the Sup8 would also offer you a longer product lifetime than your existing Sups, plus some more features.

darren-carr
Level 2

Hi John,

I'm pretty sure the fabric of the 3750X's will suffice. Currently most of the edge locations are connected to the distribution layer using 100Mbps uplinks, and are hardly scratching the surface of these. Utilisation is low in most areas.

We will have two WAN links out of the campus to our in-country datacentres. At the campus we only switch and route traffic; all firewalling/IPS/IDS, etc. is done at the datacentre. On-site there will just be local file/print services. We have centralised all of our core applications into the two datacentres. This is corporate policy and there's nothing I can do about it; I'm just trying to create a resilient and scalable LAN for the site.

I'd like to avoid using an FHRP if I can. Single SVIs configured on the 4500Es will suffice for me, using L2 port-channels from the aggregation layer to the collapsed distribution/core layer. There is a significant core in each of the datacentres.
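With VSS the pair acts as one logical switch anyway, so each VLAN would just need a plain SVI, along these lines (the addressing here is hypothetical):

interface Vlan10
 description Building A data
 ip address 10.0.10.1 255.255.255.0
 no shutdown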

As I mentioned, the main thing is that it has to be easy to administer and scale. I'm pretty happy that this design satisfies both of these requirements.

Shillings,

I don't have the line cards as yet; however, the Sup7Es were only purchased last year, so it would be hard to get this over the line I think. Our company also doesn't like to be on the 'bleeding edge' with new technology. This will be something for future deployments. Thanks for the heads-up though.

You're welcome.

The new 3850 series is the same price as the 3750-X series and has much higher stack ring performance. The downside is that you can't stack them with any existing 3750s, and Cisco doesn't currently offer an all-SFP model, which I guess you might well have in mind.

Always worth checking out the price difference to the 4500-X fixed chassis switch. The 16-port model might not be too much more, and you can always add the 8-port expansion module later. It's VSS capable too, and I seem to recall VRF-Lite is supported in the standard IP Base IOS, unlike on many cheaper switches.

Can't edit on this phone and forgot to add that all ports on the 4500-X are 1/10GbE capable, depending upon the SFP. That's non-blocking 10GbE across the max 40 ports. It also has larger buffers than the 3750-X, better suited to 10-to-1 Gb contention.

Some good information there, thanks again.

I like the look of the 4500-X, but the only way to 'stack' them is using VSS, correct? (Not really stacking, but you know what I mean.)

My only concern with this is the added complexity to the network, albeit very little; I'd be handing this over to our ops guys, who have little to no experience with VSS.

Have you done much of this with the 4500-X series? I'm just going over the VSS design guides and the Campus design guide now.

Sorry for the delay in coming back to you. Yes, you must use VSS; the 4500-X doesn't support stacking. However, there are advantages to VSS over stacking. For example, VSS supports In-Service Software Upgrades (ISSU) and Non-Stop Forwarding/Stateful Switchover (NSF/SSO).

VSS is pretty straightforward to configure and support, but there's no getting away from the fact that it would still be new to your support team. VSS has been around for a number of years though, so there's lots of experience within TAC, the Cisco forums, and the rest of the Internet.
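To give you a flavour, the conversion is only a handful of commands on each standalone chassis; something like this (the domain, switch and port numbers are just examples, so check the configuration guide):

switch virtual domain 100
 switch 1
! (use 'switch 2' on the second chassis)
!
! dedicate the two DAC links as the virtual switch link (VSL)
interface Port-channel63
 switch virtual link 1
! (use Po64 and 'switch virtual link 2' on the second chassis)
!
interface range TenGigabitEthernet1/15 - 16
 channel-group 63 mode on
!
! finally, reload both chassis into virtual mode:
switch convert mode virtual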

You'd only need a couple of 1m 10GbE Direct Attach Cables (DACs) between a pair of 4500-X switches for VSS. These are much cheaper than purchasing four fibre modules. You probably already know this last bit, but it's worth a quick mention.

No doubt you've already found the 4500-X data sheet, but this white paper might interest you too:

http://www.cisco.com/en/US/prod/collateral/switches/ps10902/ps12332/white_paper_c11-696802.html

Hi Shillings...

No worries about the delay; it's not as if I am paying for your time. I do, however, appreciate your replies: all very useful information and food for thought.

I was looking at the 16-port model of the 4500-X. I am assuming that, for redundancy, I would connect a pair of 4500-Xs using two of the 16 ports, leaving me with 14 available ports. I'd also lose a port for the uplink to the collapsed core, leaving a total of 13 ports per switch. Does this sound correct? The reason I ask is that in some of the buildings there could be more than 13 access switches, so I'm considering whether a model with a higher port count would be better.

Also, if I have access switches that aren't stacked, and I connect them redundantly to each of the VSS'd 4500-X switches in the aggregation layer of a building, would the 4500-X VSS pair simply block one of the ports if RPVST is enabled? Surely it would cause a loop otherwise?

Thanks

Just to let you know, we use VSS (on 6509E chassis) in our core and it works really well. The multi-chassis etherchannel is really good. We have stacked 3750-X series as distribution with 3560-X access switches (both with redundant power supplies). Everything is dual-connected on the uplinks.

We will be moving to the 3850 switches in new buildings for the distribution layer.

Multiple WiSM2 modules for Wi-Fi in each 6509E chassis.

That's good to hear. I'm warming towards VSS as our solution.

May I ask why you plan to use the 3850s and not the 4500-Xs for your distribution layer in the new buildings? I'm just curious, as I was looking at the 3750-Xs but started looking into the 4500-Xs over the last couple of days and think this may be the way forward for us (I haven't looked at the 3850s, hence me asking).

Thanks

It revolves mainly around cost and the features needed. We also looked at the 4500-Xs and they were a serious contender, but the cost of them versus the 3850s made it a no-brainer. Now, if we were looking at multiple 10G uplinks going into the distribution, then obviously the 4500-Xs would be the better option.

Currently we have 4Gbps uplinks to the 3850s and 40Gbps to the core (only supported on the 48-port model of the 3850). This is adequate for our needs for the foreseeable future.

The 4500X is a nice box if you have the cash!


Do you have copper or fibre uplinks to the access layer?

If copper, then 3850 is preferable to the 3750-X series, based on stack ring speed.

If fibre, then I presume the 3850 is a no-go, because Cisco doesn't offer an all-SFP model at present.

codflangler's point about needing the 48-port 3850 for 4 x 10GbE uplinks is a good one. Requiring a 48-port model for better 10GbE capacity will narrow the price gap to the 4500-X series. Note that the 3750-X only supports 2 x 10GbE, no matter which model you go for.
