
Need an Eagle's Eye View - LAN Design

fahim
Level 1

We have been given a rather complex proposal by our 'independent' system integrators.

I call it complex because they have incorporated a distribution layer, a core layer and a separate set of server farm switches into the design of our LAN, which covers a 17-floor building with about 250 nodes per floor.

I do understand that this is the generic Cisco SME/Enterprise design these guys have put forward, but our requirements are simple.

As mentioned above, it's going to be a single building with 17 floors, each floor with about 250 ordinary Windows XP Pro users. I don't foresee the need for more than a gigabit uplink from each IDF to the core, let alone a distribution layer of switches.

On the contrary, these guys are proposing 10G uplink from all.

Is 10G / Distribution layer a norm these days?

Disregarding the need for VoIP and wireless (we might not be considering them at this stage), please take another quick look at the diagram and advise on any anomalies you might see.

Minimal oversubscription and security for the server farm are among my main concerns.

Any and all comments that might help us arrive at a more economical but equally efficient solution would be much appreciated.


Pavel Bykov
Level 5

OK, we are in a somewhat similar situation to yours.

But beware! It all depends on the applications your users run. Some graphics companies run 10 gig to the desktop.

My company is a large insurance enterprise with "normal" apps.

So here are my points from experience:

1. 2x10gig per 250 users is quite an overkill. We did buy the SUP V for our 4506s, but that's more of an outlook for the future, as we are using 2x1Gbps uplinks and they cover our needs very well so far (with a mix of 10/100 and 10/100/1000 linecards for user access, all with inline power).

2. The 4507R is a redundant chassis. Are you a banking business? Running only one SUP is a big saving. We have a mix of 4506s and 4507s, but they all run one SUP (the extra slot on the 4507 is usable only for a SUP, so it's wasted). None of the problems we've ever had with the 4506/4507 would have been prevented by a second SUP.

3. The C6509 in Distribution could be reduced to a C6506. You don't need redundant SUPs that much, since you are running two boxes at the same time.

4. The Core also seems to have too many module slots. What are the linecards? There are only a few connections to it...

5. I would suggest running Layer 3 up to the Access Layer.

6. I would suggest L3 CORE in this setup.

7. NOTE: Your Edge Distribution connects to Campus Distribution instead of CORE like Cisco suggests.

8. The Internet link is WAY too slow. We have 160 Mbps for 10,000 users, with 80 Mbps normally available, and it's loaded - we need to use QoS on the Internet links. So 8 kbps per user is my minimum suggestion; in your case, at least 30 Mbps (but it depends on how your users use the Internet - our Internet proxies allow users only to browse: no Internet email, no FTP and so on).

9. I don't like how the SFARM is done, but my way would be more expensive, although more manageable, and fewer cables would be required (2x stacked 3750Es in each rack, connecting with uplinks to a 48x1Gig linecard on each 6506).

Their design is good, but there will be A LOT of cables. Unless you use a good cable management system, there will be outages because someone disconnected the wrong cable, stepped on a cable, broke a cable, fell into the cables... I've seen too much of this to go into it again without a really good, working cable management solution.

10. Good luck

Hope this helps.

Pavel Bykov
Level 5

Also, I'm not sure of the price difference between the 3750 and the 3560G. If it's not a lot, I would suggest putting stacked 3750s in the Edge Distribution instead of the 3560Gs.

And it seems to me that you could be OK without a "CORE" at all. Just connect the DISTR. to the S-FARM.

That's what we have, and it works really well.

Thanks for the detailed reply, Slidersv. It took me some time to cross-reference every issue you mentioned and evaluate it on a 'need to have' basis within our environment, and you are right on most counts.

I need a bit more detail on the subjects below; I am also attaching the BOM provided to us.

I will address and try to understand the issues you highlighted in each point of your reply:

1. Yes, I don't see the need for two SUP Vs. I guess we can live with a 4506 here with a single SUP V, uplinked currently at 1 Gig. A single SUP V would give us 10Gig capability later... correct?

The issue that's still pertinent is that of PoE (inline power). Do you think I should make it the norm on all linecards of the 4506? The cost difference is quite a bit, and we don't yet use IP telephony.

2. You mentioned some problems you've had with the 4506/07. Is there anything I should be cautious about at this design stage?

Yes, again, I can live with one SUP V on each.

3. I see that the 6506 is end-of-sale and the advised replacement is the 6506E; the same goes for the 6509. In the Bill of Quantities (BOQ), though (attached file), the guys are providing the 6509E, and that's a solace ;)

But, correct me if I am wrong, the only difference I see between the 6509E and 6506E is the slot count - 9 on the former versus 6 on the latter. Is there any other feature missing from the 6506?

I noticed that in the BOQ, the guys have listed only 'two' WS-SUP720-3Bs for each pair of 6509s in the core and Distribution. That means they are probably not supplying a redundant SUP in any of the 6509s. But then, why so many slots? Beats me too!!

4. Connections to the core, according to the diagram, would be only 8, right? Two from each of the four neighbouring switches in the Distribution and server farm.

In the BOM, the guys have included this on each of their Core 6509Es:

a. One 8-port 10 Gigabit Ethernet module

b. Ten 10GBASE-LR X2 modules

Do you see any need for that, based on this BOQ?

5. You suggest that we run L3 up to the access layer. Please elaborate on this aspect. Do you mean that the 6509 doesn't support L3 functionality in its base IOS?

6. Same on this point too; I didn't quite get it. I would appreciate it if you could tell me how to go about achieving this.

7. It is Cisco's suggestion to connect my Edge to the Distribution and then to the Core within the same building. I am still a bit confused about the need to bring the Distribution into the picture. Why does someone use a Distribution layer?

(*I know it's going to be another long reply from your side but please do hold on to your keyboard one more time*)

8. Another point we need to consider soon. Our Internet speed is indeed slow, but then we are still only about a 2000-user base, with not all of them given to browsing and no Internet mail.

9. For the server farm, I am beginning to look beyond Cisco. How about a third-party Unified Threat Management system - something like Fortinet, SonicWall, Cyberoam, etc.?

The servers would then live behind the UTM, connected to stacked switches (minimal oversubscription) in each rack. All users would have the UTM IP as their gateway. This way, the servers are also protected by an antivirus/IPS solution.

Have you come across such an arrangement? Is it workable? You suggested that I use 3750Es in each rack; are there any oversubscription caveats here?

Lastly, you suggested using the 3750 instead of the 3560G - why was that?

Thanks a lot for your earlier reply.

Please see my reply below.

jwdoherty
Level 1

It's not clear what advantage the 3560Gs provide. Why not connect directly to the 6500s? If there are other devices to connect, for an L2 fan-out, a stack of 3750Gs would be better. Also, even if you need the separate fan-outs, can the physical topology just have one 3750G stack for both the PSTN and Internet connections?

Instead of 4507s, use stacks of 3570s, 3570Gs or both, depending on whether you want to provide any gig to the desktop. Gig uplinks, initially, will probably be fine. If more bandwidth is needed, Etherchannel them or add a 3750-E to the stack and use 10 gig. (Note you can do L3 on 3750s with a software upgrade - not tied to specific sups as with the 4500 series.)

For server side, instead of 6513s, use stacks of 3750-Es with 10 gig uplinks (assuming servers will have gig). You could either just have dual stacks to directly replace the shown 6513s or one stack per server row.

Collapse the distribution and core 6509s into a single pair (which would be the only pair of 6500s). Ensure the line cards have DFCs and use the sup720.

Lastly, use the least expensive medium or module that supports the bandwidth required, e.g. copper 10 gig for cross-connects between the 6509s.

Joseph, for some reason I haven't seen a firewall's 'inside' interfaces going directly to the core. Your post got me thinking about why that isn't usually done, though.

Your point about using the same stack of 3750Es for the PSTN and Internet connections is fair, though.

2. I guess what you meant as a replacement for the 4507s was 3750s (probably a typo there, '3570G'), for I failed to find that model on Cisco's site - or maybe I didn't look hard enough.

My query is: using a stack of 3750Es vis-a-vis a 4507R chassis, what advantages do I get apart from the obvious price factor?

3. On the edge, I get L3 on the 3750; how can I utilise it to further increase my network performance?

4. If I use stacks of 3750Es with 10G uplinks, what's my oversubscription? Do I get jumbo frame support for future NAS/SAN coming into the server farm?

5. I am in complete agreement with your idea of collapsing the Distribution and Core layers into one. I am only wondering why and where someone would use a demarcated core and distribution.

What are the advantages of the design these guys are providing us with?

Your point about using DFCs on the line cards is a really good one. I attached the BOQ in my earlier reply; please take a look. I guess their line cards support DFCs, and they have Sup720s on the 6509s.

Thanks again. Awaiting your insightful reply.

#2 The 3750G is one of the 3750 models; those with 10/100/1000 ports.

3750E vs. 4507R (BTW I suggested 3750 or 3750G for user edge, 3750-E for server edge) - advantages: performance, more ports, redundancy w/o need for 2nd sup (frankly although the 4500 series is solid, it's getting a bit dated, especially with the chassis fabric's performance)

#3 Yes, you can do L3 on 3750s, and I'm not recommending that you do or don't, but with regard to the question on performance, the major benefit would be routing between subnets within the stack without having to traverse the uplinks/downlinks. Usually not needed on the user edge; more of a possible benefit on the server edge.

#4 Your oversubscription is up to you. With a single 24-port 3750-E, best case is 2.4:1 (24 gig edge ports over one 10 gig uplink). You can maintain a similar ratio across a stack if you also use the other stack members' 10 gig ports. If you use the 48-port models, the ratio becomes 4.8:1.

The 3750-E supports jumbo frames (9216 bytes).

#5 The primary advantage of the classic 3-layer hierarchical design is that it allows scalability. This is still true, but what often seems to be overlooked is how much the performance of current-gen equipment has improved. Ask those who proposed the design to explain why they want it. Hopefully the answer will not be "that's the way we always design for this size".

A nice Cisco paper on gig design can be found here: http://www.cisco.com/en/US/netsol/ns340/ns394/ns74/ns149/networking_solutions_white_paper09186a00800a3e16.shtml

What I'm suggesting is that the "Collapsed Backbone - Small Campus Design" might be suitable for your existing building using current-gen equipment.

Yes, I do see DFCs on the BOQ. They would be for the 6748s since the 6708s have them built in.

I was surprised to see only LR modules for 10 gig. As I noted in my prior post, using the least expensive module that covers the distance can save quite a bit of money.

Also, if you're concerned about performance and oversubscription, the proposed 6513s do not offer 40 gig to most of the slots, and all the 8-port 10 gig cards are oversubscribed at about 2:1.

Two other thoughts came to mind.

First, do you really need/want dual 6509s or would a redundant 6509 be better? There are pros and cons for both.

Second, if you go with a collapsed backbone (either single or dual), normally only other fan out network devices would connect to it/them. However, you can make special exceptions for very high volume "user" devices. E.g. SAN or perhaps very busy servers such as corporate mail server(s).

Something caught my eye just recently: the two ASA 5540s are shown as dual-homed in the design. Is this some kind of new capability in the ASAs?

Now, do I really need dual 6509s or a redundant one? That's a tough one. I thought it was a redundant design and not dual. On the contrary, I was planning to distribute the VLAN load across both 6509s using RSTP, which would also cater for redundancy.

Your thoughts?

Btw, one of my friends tells me that they use Nortel, and Nortel has a proprietary protocol that shares the load across the two cores and seamlessly fails over if one of them goes down. Does Cisco have something like that?

1. Correct. As for PoE, if you are planning to use IP telephony (in the future, not now), go for PoE, since the cards are expensive (as you mentioned) and it would be a waste to replace so many linecards. It all depends on your future plans - just like the SUP V, it's an investment in the future.

2. Well, the early models of the SUP V had bad memory that made them lock up. We had one 4506 backplane failure (a second SUP wouldn't save you from this). We also had one 4507 that set itself on fire (!!!); luckily the fire protection cut the electricity in time (the connector to the SUP had started to short-circuit).

3. You shouldn't go for so many slots if you aren't planning to use them. There has to be a purpose or a future plan to use them.

4. Right. I think you can go for a design without a CORE at all.

5. There is L3 functionality, but Layer 2 creates an additional point of failure, does not easily support load balancing (STP blocking state), and is sometimes slow to converge. Also, creating local L3 domains shrinks the broadcast domains and isolates failures (we had big L2 failures where our whole building network collapsed because of one bad PC NIC, or because of L2 control-plane processing overload - too many trunks). It's just a good idea to route the VLANs (create SVIs) right on the 4507.

6. Create point-to-point VLANs with a mask like 255.255.255.252 and enable a routing protocol over them (see the sketch after this list). You will then have redundancy, fast convergence, manageability, failure isolation, and load balancing. It's all CEF, so it's not going to be slower than L2 in any case.

7. The distribution layer is a design guideline. It's there for manageability and control.

8. 2000 users with an Internet connection should have around 16 Mbps.

9. No experience with that, sorry.
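To put points 5 and 6 together in one concrete, purely illustrative sketch: assuming the 4507 runs an image with IP routing and your routing protocol of choice, the floor VLAN gets a local SVI and the uplink becomes a /30 point-to-point VLAN carried into the routing protocol. All VLAN numbers, addresses and the OSPF process number below are invented for the example, not taken from the BOQ:

! 10th-floor 4507 (illustrative addressing)
ip routing
!
! Point 5: terminate the user VLAN locally as an SVI
interface Vlan10
 description 10th-floor-users
 ip address 10.10.10.1 255.255.255.0
!
! Point 6: /30 point-to-point VLAN towards one distribution switch
interface Vlan901
 description p2p-uplink-to-distribution-A
 ip address 10.255.0.1 255.255.255.252
interface GigabitEthernet1/1
 description physical uplink carrying only VLAN 901
 switchport access vlan 901
!
! advertise the floor subnet and the uplink into the routing protocol
router ospf 1
 network 10.10.10.0 0.0.0.255 area 0
 network 10.255.0.0 0.0.255.255 area 0

A routed port ("no switchport" plus the IP address directly on the uplink interface) achieves the same thing without the extra VLAN.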

The 3750 has gigabit ports. It has a similar design to the 3560, but it has StackWise. That is a very good technology for resiliency, redundancy, manageability, and so on. You see all the stacked switches as one switch, and convergence is extremely fast.

hope this helps.

Please use the rating system to rate helpful posts.

Hi Pavlo

Once again, thanks for the reply and I have rated it duly.

Seeking your advice on three points:

1. Say, on the tenth floor I just create a single-subnet VLAN, 10.10.10.0/24, and my 4507 has an SVI of 10.10.10.1/24 defined on it. On the ninth floor I have an SVI of 10.10.9.1/24.

My understanding is that L3 routing for all traffic would be done on the Core.

According to your point no. 5, how would I go about it alternatively, if I do L3 on my edge?

2. Please elaborate on your point no. 6 in a bit more detail with an example, like the ninth and tenth floor one above. I am a bit of a novice on this issue of P2P VLANs.

Rgds

If each uplink path is defined as a routed link (the purpose of the point-to-point VLAN on that link), then the routing protocol has two paths to utilize and, depending on the routing protocol, will use both. This is similar to the traffic flow between the distribution and core 6500s in your original diagram.

(For the edge, easy usage of both links might also be accomplished with some of the newer gateway redundancy technology, e.g. GLBP.)
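As a rough sketch of what that looks like from the edge switch's side (interface numbers and addresses are invented for the example), each uplink gets its own /30, and a routing protocol such as OSPF then installs both as equal-cost paths:

! edge switch/stack with two routed uplinks (illustrative)
interface Vlan901
 description p2p-to-distribution-A
 ip address 10.255.0.1 255.255.255.252
!
interface Vlan902
 description p2p-to-distribution-B
 ip address 10.255.0.5 255.255.255.252
!
router ospf 1
 network 10.255.0.0 0.0.255.255 area 0
! OSPF keeps up to four equal-cost paths by default, so both
! uplinks end up in the routing table and CEF load-shares per flow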

Some other advantages of edge routing:

o Routed links are easier to traffic engineer. E.g. if you want to dedicate one link for certain traffic.

o If you see much traffic flowing between the 9th and 10th floors, and they are separate subnets, you could interconnect them with an additional routed link or links. Most routing would take the shorter path, bypassing the upstream router, i.e. one less physical hop.

o If you place multiple subnets on the same floor, routing on the edge can pass traffic between them without the need, again, to transit the upstream router.

The L2 issues that Pavlo mentions are very real. However, they usually arise when you extend VLANs across the network, e.g. the same subnet on floors 9 and 10. If you restrict VLANs to just one edge device, and route everywhere else, and don't have many traffic flows between same side edge devices, edge routing isn't as critical, and the choice between them becomes difficult.

I would agree that edge routing may better position your network for the future, but since your initial post did ask about economy, do keep in mind edge routing usually comes at an additional cost, either for hardware, software or both. However, if you chose not to do edge routing, I suggest selecting equipment that can easily be upgraded to do so.

(Pavlo, hope you don't take offense at my also answering Fahim's questions. Did want to inject the cost consideration.)

I'm not familiar with the ASA 5540s, so I'm unable to comment on whether they can be dual-homed.

Distributing the load across multiple 6509s often depends on how you configure your topology, for instance whether it's L2 or L3. Keep in mind that the performance bottleneck is often not the 6509 itself, but the uplink bandwidth not being fully utilized across multiple paths. Cisco GLBP can be attractive for that with an L2 access layer.
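For what it's worth, here is a minimal GLBP sketch (the VLAN, group number and addresses are made up) showing how two 6509s can share the default-gateway load for an L2 access VLAN:

! 6509-A (illustrative)
interface Vlan20
 ip address 10.10.20.2 255.255.255.0
 glbp 20 ip 10.10.20.1
 glbp 20 priority 110
!
! 6509-B
interface Vlan20
 ip address 10.10.20.3 255.255.255.0
 glbp 20 ip 10.10.20.1
!
! all hosts use 10.10.20.1 as their gateway; GLBP answers ARP with the
! virtual MACs of both switches in turn, so both forward traffic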

With regard to your last question, I'm unaware of any publicly announced feature that does anything like you describe.

There's a lot to like about this network design, but if your requirements are "simple" then it's kind of overkill.

Access Layer:

- If you will have approx. 250 nodes per floor, then the 4507Rs will not scale enough for your needs; a 4507R only has a max of 240 copper ports. If your requirements truly are simple, then redundant supervisors and 10 gig to the distros are too much. I'd go with the Catalyst 3750-48PS or 3750G-48PS. This assumes you want PoE for VoIP phones and wireless in the future. The Catalyst 3750 can scale up to 432 ports when stacked (nine 48-port switches per stack).

Distro Layer:

- I agree with Slidersv: 6509Es in the distro are too much. Go with the 6506Es; they are cheaper and will have more than enough capacity for your current and future needs, especially if you get the WS-X6748-SFP line modules, which are 48-port SFP blades and are fabric enabled. Also, go gigabit from the building distros to the core, but bundle a couple of gig ports into an EtherChannel for greater bandwidth (see the sketch after this list).

- I too think it's awkward to have the Enterprise Edge module hanging off the building distros. Ask your integrator why they do it this way and report back to us; I'd like to know. I've worked on some very large networks, and the provider edge and other modules have always hung off the core, not the building distros.
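For the gig EtherChannel from a building distro to the core mentioned in the first point above, a minimal sketch (interface numbers are arbitrary) would be along these lines, mirrored on the core-side ports:

! building-distribution 6506E, two gig ports bundled towards the core
interface range GigabitEthernet3/1 - 2
 channel-group 1 mode desirable   ! PAgP; use "mode active" for LACP
!
interface Port-channel1
 description 2x1G-bundle-to-core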

Core Layer:

- The core looks fine; maybe that too can be a 6506E instead of a 6509.

Server Farm:

- I like the server farm module; I would not change anything. Keep the 10 gig between the server farm distros and the core; this will future-proof your investment.

Enterprise Edge:

- Why do they have two 3560Gs right before the ASA firewalls? They can connect the ASA devices straight to the distros/core.

- Agreed with the others also: your Internet links are too slow.

Looking at the BOM, they have redundant supervisors in all the 6500s. I personally think that since you will have dual 6500s at the building distro, core, and SF distros, you don't need dual supervisors in each chassis; one will do. Also, if this network is going to be all Cisco, why go with OSPF as your routing protocol? My preference would be EIGRP.
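If EIGRP is chosen, a minimal starting point on each L3 switch (the AS number is just an example) would be something like:

! illustrative EIGRP configuration, autonomous system 100
router eigrp 100
 network 10.0.0.0
 no auto-summary
! the same AS number has to be used on every switch in the domain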
