Need an Eagle's Eye View - LAN Design

fahim
Level 1

We have been handed a somewhat complex proposal by our 'independent' system integrators.

I call it complex because they have incorporated a distribution layer, a core layer, and a separate set of server farm switches into the design of our LAN, which spans a 17-floor building with about 250 nodes per floor.

I do understand that this is the generic Cisco SME/Enterprise design these guys have put forward, but our requirements are simple.

As mentioned above, it's going to be a single building with 17 floors, each floor having about 250 ordinary Windows XP Pro users. I don't foresee the need for more than a gigabit uplink from each IDF to the core, let alone a separate set of distribution switches.

These guys, on the contrary, are proposing 10G uplinks from all IDFs.

Is a 10G/distribution layer the norm these days?

Disregarding the need for VoIP and wireless (we might not be considering them at this stage), please take another quick look at the diagram and advise on any anomalies you might see.

Minimal oversubscription and security for the server farm are among my chief concerns.

Any and all comments that might help us arrive at a more economical but equally efficient solution would be much appreciated.

32 Replies

Like the poster above, at first glance I like this design. It's very "leading edge", because most customers don't do 10 Gig throughout the network. That said, 10 Gig has been out for 4-5 years now and you're building a new network, so I say go with it.

Before I say anything: there are multiple ways to skin this cat. It comes down to the network managers/engineers and their preferences, and it comes down to budget.

Access layer ==> There is no right or wrong: 3750s, 4507s; heck, for customers that have the money I've seen 6513s in the IDFs/closets. What do you want?

Distro/Core ==> Collapse or don't collapse; it's a budget thing. Can you afford four boxes or only two?

The answer isn't a Layer 2/3 issue, it's Layer 8 or 9 (politics/budget) :). Cisco recommends the old Hierarchical Design Model and now the newer Enterprise Composite Network Model (for designs bigger than a single campus).

Server farm ==> Go with the 6513s.

Core ==> Go with Layer 3 /30 point-to-point links. Anyone who doesn't is missing the boat these days. The trend is to get rid of L2 in the distro/core, and now even in the access layer. Why bother with the hassle of trying to make sure your Layer 2 settings match your L3? STP, RSTP, UplinkFast, BackboneFast: minimize that stuff.
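For illustration, one side of such a routed point-to-point link might look like the sketch below (the interface number and addresses are placeholders, not taken from your proposal):

interface TenGigabitEthernet1/1
 description P2P link to CORE-2
 ! routed port: no VLANs, no trunking, no spanning tree to tune
 no switchport
 ! /30 mask = exactly two usable addresses, one per end
 ip address 10.0.1.1 255.255.255.252

The far end gets 10.0.1.2/30, and your routing protocol runs straight across the link.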

Routing protocol ==> EIGRP has an easier learning curve. OSPF is for when you're sticking in someone else's equipment and need to work and play well together. But why do that when you've got the chance to have one solid new network that can be supported end to end with one TAC call? Avoid all the finger-pointing and stick with one vendor.
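As a rough sketch of how little config a single-vendor EIGRP rollout needs (the AS number and address range here are placeholders):

router eigrp 100
 ! advertise the internal 10.x space; the wildcard mask limits the scope
 network 10.0.0.0 0.255.255.255
 ! stop classful summarization so the /30 link routes stay specific
 no auto-summary

The same few lines go on every L3 box, which is the learning-curve argument in a nutshell.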

Firewalls ==> It's a specialty; go with what your in-house security team is trained in and can support, so you won't feel the pain when something isn't set up correctly. PIX, and now the ASAs, are easy to use. Don't do SonicWall and that other stuff (that's the save-money approach). The other day I was with a customer who had an all-in-one red firewall (sorry, I can't remember the name). There was an issue, I had to call India for support, and I got the run-around. The fact is, the two best firewalls that most people buy are Check Point and the ASA.

PS: Move the links from the Enterprise Edge to your campus backbone/core; my guess is the guy made a mistake there. Otherwise the diagram is very well done, and the person who made it knows what they are doing. 10 Gig uplinks: if you can afford them, go with them. If not, back them down to two 1-Gig links and EtherChannel from there; you can bundle up to 8 links if you need the extra bandwidth.
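If you do start with bundled gig links, a two-link EtherChannel sketch would look something like this (port numbers and channel number are placeholders):

interface range GigabitEthernet1/0/1 - 2
 ! both ports negotiate into Port-channel 1 via LACP
 channel-group 1 mode active

More member links can be added to the same channel-group later, up to the 8-link maximum.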

At the end of the day we all follow guidelines and use tools. Cisco offers to sell us everything; that's what makes them great. End to end, they are a one-stop shop with great support. (Not to say that their Tier 1 support is the fastest, or that they can solve 100% of your problems without going to Level 2, but...)

Good luck

Thanks for that. Can you recommend the best resource/book for this kind of design, one that would cover it end to end?

I have been wanting to master this for some time now.

Sorry, this isn't a book recommendation, but the whitepapers and design guides available on the Cisco site are also worth the read. For example, you might want to look at some of the guides here: http://www.cisco.com/en/US/netsol/ns656/networking_solutions_program_home.html

pkaretnikov
Level 1

My 2 cents:

I agree with collapsing the core and distribution layers, but I would recommend that you stick with 4507Rs at the edge. The redundant power supplies are a lifesaver. I know it's a rarity that they fail, but switching out a UPS, or a power cable getting pulled accidentally, is a huge problem for end users in the middle of the day. You don't need the redundant sups, but having the capability for the future is always good. Floors with key personnel and the big bosses ought to have that extra level of service.

Recently I had my network "upgraded" (read: downgraded): 6509s were switched to stacks of 3750s, and it's been nothing but a pain. It hasn't been a performance hit, but the redundant power is sorely missed. Providing all of the closets with enough RPSs would be too expensive, so now when a UPS needs to be replaced we have to lug around RPS 675s...

fahim
Level 1

The consultant has returned with a new diagram.

With all these inputs, I went back to our consultants and apprised them of our needs. I know some of you have been in favour of the 4507s, but I thought of trying out the stacked 3560G for easy manageability. I will ask the guys if it comes with redundant power supplies.

So now, they have come up with another updated design.

The guys still insist that they cannot do without a distribution and a core layer. Last time, the general consensus here was that we were being oversold things we don't really need, for example the pair of C3560Gs on the two sides of the previous design.

I also can't seem to make much sense of the server farm switch, the C6513. If I include the FWSM and IDSM modules, would I not get performance degradation despite the 10G connectivity between the C6513 and the C6509s?

They have downgraded the ASA from a 5540 to a 5520 and maintain that this will have no effect.

The enterprise edge routers have also been downgraded, from C3825s to 2821s. I wonder what effect this would have?

Any other comments on this design please?

My main worry has now shifted from the design itself to the prospect of being 'oversold'/'undersold' with respect to complete redundancy and security.

Please advise on where I can make safe cutdowns. Thanks to you guys for all the inputs; I am so glad that I came here for expert comments.

3560G or 3750G? The former doesn't stack.

Neither comes with redundant power supplies, although a separate RPS unit can be used. If used, one RPS usually covers up to six devices against a single unit's power module failure; coverage often will not extend to multiple simultaneous failures, such as loss of the power source itself.

Seeing as you have gig for the user ports: if you stack, were you going for 10 gig uplinks or gig EtherChannel? Depending on the number of links in the EtherChannel, the cost of the interface modules might make the 10 gig the less expensive option.

If you desire 10 gig, you'll need the 3750G-16TD, which goes EOS 10/15/07, or the 3750-E. (I would recommend the latter for the uplink switches.)

"The guys still insist that they cannot do without Distribution and a Core."

I see six 10 gig links connected to each core. So you could ask "the guys" what benefit the core 6500s provide vs. just connecting the "distribution" 6500s to the server 6500s. The port count wouldn't change on either of those.

If the "core" 6500, indeed, only has 6 10 gig links, and is only routing between those links, why you could even replace the pair with a stack of 4 3750-Es. This provides only 4 10 gig connections; the stack eliminates the need for the dual 10 gig cross connects.

I too might be concerned about the performance impact of the FWSM (5 Gbps; up to 4 per chassis [20 Gbps]) and especially the IDSM-2 (500 Mbps; up to 8 per chassis [4 Gbps]) within the server 6513s. Much would depend on what traffic you're going to run through them.
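To put rough numbers on that concern, compare the aggregate module capacity against the links feeding it:

FWSM: 4 x 5 Gbps = 20 Gbps aggregate
IDSM-2: 8 x 0.5 Gbps = 4 Gbps aggregate

so a flow that must pass through a single IDSM-2 is limited to 500 Mbps, only 1/20th of one 10 gig link.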

With regard to the ASA and router downgrades, much depends on what load they will be subject to. What's the expected traffic load for each?

"Any other comments on this design please?"

Conventional design, but I still believe dual 6509s using Sup720s and DFC line cards with 10 gig ports would work as a collapsed backbone. Use edge switches to feed to/from the 6509s' 720 Gbps fabric and the chassis' 400 Mpps.

PS:

Consider: in your diagram of the proposed design, the server-side 6513s carry all traffic except client-to-client. So, assuming there is not much of that, and if they can carry that load too, dual 6500s should be able to carry your whole infrastructure.

fahim
Level 1

After months of negotiations, we are down to two options, on which I am finally seeking community recommendations here at NetPro.

To start with, thank you all for your comments so far; they have been very insightful.

Attached are the two option diagrams provided by our SIs, Options I and II.

* Option I gives me the facility to use the C3750E, which starts with a 1G uplink to the core but is upgradeable to 10G in the future.

Core, distribution and server farm collapsed into two 6509s.

Dual C3750s at the edge provide connectivity to two ASA 5540s, although I have yet to understand how they intend to multihome the firewalls as depicted in the diagrams of Options I & II.

* Option II: C4507Rs at the access layer, with distribution and core separate (two 6509s each). There is a K9 firewall module on the two core switches in their BOQ.

The two ASA 5540s are again connected the same way as before, multihomed.

Based upon the suggestions in this thread, the intention is to go for Option I: collapsed backbone, C3750Es with dual 10G uplinks to the core.

Your thoughts on the flaws, merits, and drawbacks of Option I vis-a-vis Option II are sought before we sign on the dotted line.

Regards.

"* Option-I, gives me facility to use C3750E which starts with 1G uplink to core but is upgradeable to 10G . . ."

"Based upon the suggestions in this thread, the intention is to go for Option I, collapsed backbone, C3750E with dual 10G uplinks to core."

You may run out of ports moving to 10 gig unless you also place 3750-Es facing the servers.

On the user-facing side, you may want to consider using 3750Gs for the non-uplink stack members.

If your budget was able to support all the 4500s and multiple 6500s, but you go with 3750s instead, you might want to consider 10 gig uplinks now. You might also want to consider dual sups for NSF.

Thanks for the reply, Joseph. If I go for the 4507R vis-a-vis the 4506, I get the same number of line cards (5), but the 4506 supports only a single sup. Which one is better?

Also, the 4507R and the 4506 both seem to support the Supervisor Engine II-Plus, II-Plus-10GE, IV, V, and V-10GE.

What should be my basis for choosing a sup engine? The vendor has mentioned "Catalyst CAT 4500 Sup II + 10GE" in his BOQ.

Is this OK!!??

"If I go for 4507R vis a vis 4506, I get the same number of line cards(5) but Single SUP support. Which one is better?"

The "R" chassis is "better" if you want to provide a redundant sup. Without it, if the sup fails, the chassis fails. Unless your user edge is critical, single sup chassis might be fine especially if you keep one spare on hand to cover all the 4506s.

The "Catalyst CAT 4500 Sup II + 10GE" is primarily a L2 switch with some L3 capabilities, see: http://www.cisco.com/en/US/products/hw/switches/ps4324/products_data_sheet0900aecd80356bde.html. The Sup V-10GE is a full L3 switch, see: http://www.cisco.com/en/US/products/hw/switches/ps4324/products_data_sheet0900aecd801c5c66.html and is a bit faster and has additional hardware features too. The Sup II should be fine if you just plan on doing L2 at the user edge. However, unlike the 3750s, you need to replace the sup to get full L3, software upgrade won't do. For the user edge, I would expect almost all traffic to be to/from the uplinks, so full L3 isn't likely to be required.

Joseph, your replies have been most helpful, and that's one of the reasons I keep rating them and posting again and again.

One last query for the week probably :)

On the server edge, I do not want my servers to connect directly into my core, primarily for purposes of manageability and fault finding. I am looking at about 100 servers max over a period of 5 years, including high-bandwidth file and DB servers. I am contemplating what to choose as my server farm aggregation switch.

I have three options there:

1. I connect my servers to a 4507R with redundant Sup V-10GEs (full L3 capabilities), with redundant 10G uplinks to the core.

2. I connect my servers to stackable 3750Es with 10G uplinks to the core.

3. I connect my servers to Catalyst 4948-10GEs with the Enhanced Services (ES) image loaded and 10G uplinks to the core.

Which of the above is the best option for me, and what should my criteria be? Oversubscription caveats? Costs? And how do I cater for port redundancy?

I think the 3750E would be the cheapest, with the 4507R the costliest.

For the server edge, if we're dealing with gig edge ports, the 6 Gbps per-line-card fabric connection concerns me for high-density port cards (e.g. 48 gig ports) on the 4500 series.

The 4948-10GE has much going for it, but it could force you to use lots of 10 gig ports, either to your core 6509s or even daisy-chained among a group of 4948s.

The 3750-E is the most interesting option because of its high-speed stack. For maximum performance, you could have a single unit uplinked to the two core 6509s (similar to using a 4948). Or you could build a stack of 3 units, giving you performance similar to a "6505" (no such chassis) with dual sups. Or you could build a stack of 9 units, which is somewhat like a 6513 with dual sups. Additionally, one size doesn't have to fit all; you can run multiple stacks of different sizes.

Another interesting option with 3750-E stacks: assuming you start with just dual uplinks per stack, the additional 10 gig ports can be used either as channel bundles to increase bandwidth to/from the stack and/or as 10 gig ports to selected hosts.

With regard to port redundancy, you can connect servers to different 3750-Es, either within the same stack or in different stacks. The former will be a little less expensive.

Also, with the 3750-E you could start with just the basic image until you see a need for full L3 at the edge. If you do move to L3, you don't have to do it (and license it) for all edge stacks if it's unneeded.

The biggest caution with the 3750-E is that it's a very new device and doesn't have the track record of the 4500 or 6500 series. However, Cisco, I think, is one of the best vendors at working to make things right.

"For the server edge, and if we're dealing with gig edge ports, the 6 Gbps per line card fabric connection concerns me for high density port cards (e.g. 48 gig ports) on the 4500 series."

Joseph, this is interesting; I didn't know line card throughput also mattered. I knew that by using the Sup II+ 10GE I get 108 Gbps of fabric connectivity, and by using the Sup V 10GE on the 4507R chassis, I get 136 Gbps.

Now, if I only get 6 Gbps per line card, that means a max of 30 Gbps across 5 line cards of 48 gig ports each.

What advantage does using the Sup II+ or Sup V give me in terms of switch fabric throughput?

Also, I searched hard through the datasheets for the 4500 series but couldn't sink my teeth into this highly relevant piece of information, i.e. the 6 Gbps per line card on the 4507R. Where can I find this?

See table 1 in http://www.cisco.com/en/US/products/hw/switches/ps4324/products_data_sheet0900aecd802109ea.html and look at the "wire rate" column. For one example, see the WS-X4548-GB-RJ45; it notes "8-to-1". For another, see "Bandwidth is allocated across six 8-port groups, providing 1 Gbps per port group" within the description of Figure 24, the WS-X4548-GB-RJ45.
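That "8-to-1" is just the offered port bandwidth divided by the fabric connection:

48 ports x 1 Gbps = 48 Gbps offered, vs. 6 Gbps to the fabric
48 / 6 = 8:1 oversubscription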

See table 2 within http://www.cisco.com/en/US/products/hw/switches/ps4324/products_data_sheet0900aecd801792b1.html. Note that the 4506/4507R with either the Sup II-Plus-10GE or the Sup V-10GE is listed as 108 Gbps, 81 Mpps. For bandwidth, take the 30 gig (6 Gbps x 5 slots) plus dual 10 gig uplinks, giving 50 gig; double it for full duplex, and you need 100 Gbps.
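Written out, the sizing is:

(5 slots x 6 Gbps) + (2 x 10 Gbps uplinks) = 30 + 20 = 50 Gbps one way
50 Gbps x 2 (full duplex) = 100 Gbps, which the 108 Gbps fabric covers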

"What advantage does using SUPII+ or SUP V gives me in terms of switch fabric throughput?"

Either will provide wire-rate performance for supported chassis, within the chassis limitations.

PS:

The 6500 series using the Sup720 provides either 20 or 40 Gbps per slot, depending on the chassis. So, for instance, 48 gig port cards can't deliver full rate to/from the card.

3750-E StackWise Plus is listed at 64 Gbps, but it's really 32 Gbps (full duplex); compare against the 4500's 6 Gbps or the 6500's 20/40 Gbps per slot.

PPS:

3750 StackWise is listed at 32 Gbps, but it's really 16 Gbps (full duplex).

In light of this new bit of info, I might need to reconsider our decision to go with stacks of 5 x 3750Es within each IDF as compared to 4506s.

You mentioned that the 3750E-48PD StackWise Plus is listed at 64 Gbps but is really 32 Gbps (full duplex), despite the switch having a 128 Gbps switching fabric. When we compare against the 4506 with the Sup II+ 10GE, I get 108 Gbps, but according to your analysis I am only effectively getting 6 Gbps of connectivity per line card (X4548-GB-RJ45, five of them) to the fabric.

Can I safely deduce that, without considering uplinks to the core, which will be the same in either case, I am about 2 Gbps better off (throughput-wise, 30 Gbps vs. 32 Gbps) with a stacked 3750E vis-a-vis a 4506/7R with 48-port gig line cards in five slots?
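Laying my own arithmetic side by side, if I've understood the numbers correctly (and ignoring the uplinks, which are common to both):

4506/4507R: 5 line cards x 6 Gbps to fabric = 30 Gbps
3750E stack: 64 Gbps StackWise Plus ring / 2 = 32 Gbps full duplex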
