Need an Eagle's Eye View - LAN Design


We have been provided with a rather complex proposal by our 'independent' system integrators.

I call it complex because they have incorporated a distribution layer, a core layer and a separate set of server farm switches into the design of our local LAN, which covers a 17-floor building with about 250 nodes per floor.


I do understand that this is a generic Cisco SME/Enterprise design which has been put forward by these guys, but our requirements are simple.


As mentioned above, it's going to be a single building with 17 floors, each floor having about 250 normal Windows XP Pro users. I don't foresee the need for more than a gigabit uplink from each IDF to the core, let alone a distribution layer of switches.


These guys, on the other hand, are proposing 10G uplinks from all of them.

Is a 10G uplink / distribution layer the norm these days?

Disregarding the need for VoIP and wireless (we might not be considering them at this stage), please take another quick look at the diagram and advise on any anomalies you might see.


Minimal oversubscription and security for the server farm are among my main concerns.


Any and all comments that might help us arrive at a more economical but equally efficient solution would be much appreciated.



Pavel Bykov Fri, 08/10/2007 - 07:09

OK, we are in a somewhat similar situation to yours.

But beware: it all depends on the applications your users run. Some graphics companies run 10 gig to the user.

My company is a large insurance enterprise with "normal" apps.


So here are my points from experience:

1. 2x 10 gig per 250 users is quite an overkill. We did buy the SUP V for our 4506s, but that is more of an outlook for the future, as we are using 2x 1 Gbps uplinks and they cover our needs very well so far (with a mix of 10/100 and 10/100/1000 linecards for user access, all with inline power).


2. The 4507R is a redundant chassis. Are you a banking business? If not, running only one SUP is a big saving. We have a mix of 4506s and 4507s, but they all run one SUP (the extra slot on the 4507 is usable only for a SUP, so it's wasted). None of the problems we have ever had with the 4506/4507 would have been solved by a second SUP.


3. The C6509 in the distribution could be reduced to a C6506. You don't need redundant SUPs that much, since you are running two boxes at the same time.


4. The core seems to have too many module slots too. What are the linecards? There are only so few connections...


5. I would suggest running Layer 3 up to the Access Layer.


6. I would suggest L3 CORE in this setup.


7. NOTE: Your Edge Distribution connects to the Campus Distribution instead of to the CORE, as Cisco suggests.


8. The Internet link is WAY too slow. We have 160 Mbps for 10,000 users, with 80 Mbps normally available, and it's loaded - we need to use QoS on the Internet links. So 8 kbps per user is my minimum suggestion; in your case, at least 30 Mbps (but it depends on how your users use the Internet - our Internet proxies only allow browsing: no Internet email, no FTP and so on).


9. I don't like how the SFARM is done, but my way would be more expensive, although more manageable, and fewer cables would be required (2x StackWise-stacked 3750Es in each rack, connecting with uplinks to a 48x 1 gig linecard on each 6506).

Their design is good, but there will be A LOT of cables. Unless you use a good cable management system, there will be outages because someone disconnected the wrong cable, someone stepped on a cable, someone broke a cable, someone fell into the cables and strangled themselves... I've seen too much of this to go into it again without a really good, working cable management solution.


10. Good luck



Hope this helps.

Pavel Bykov Fri, 08/10/2007 - 07:19

Also, I'm not sure of the price difference between the 3750 and the 3560G. If it's not a lot, I would suggest putting stacked 3750s in the Edge Distribution instead of the 3560Gs.



And it seems to me that you could be OK without a "CORE" at all. Just connect the DISTR. to the S-FARM.

That's what we have, and it works really well.

Thanks for the detailed reply, Slidersv. It took me some time to cross-reference every issue you mentioned and assess it on a 'need to have' basis within our environment, and you are right on most counts.


I need a bit more detail on the points below; I am also attaching the BOM that was provided to us.


I will address and try to understand the issues you highlighted in each point of your reply:


1. Yes, I don't see the need for two SUP Vs. I guess we can live with a 4506 here with a single SUP V, uplinked at 1 gig for now. A single SUP V would give us the capability of 10 gig later... correct?


The issue that's still pertinent is that of PoE (inline power). Do you think I should make it the norm on all linecards of the 4506? The cost difference is quite a bit, and we don't yet use IP telephony.


2. You mentioned some problems you've had with the 4506/07. Anything I should be cautious about at this design stage?

Yes... again, I can live with one SUP V in each.


3. I see that the 6506 is EOS and the advice is to replace it with the 6506E; the same goes for the 6509. In the Bill of Quantities (BOQ) though (attached file), the guys are providing the 6509E, and that's some solace ;)

But, correct me if I am wrong, the only difference I see between the 6509E and the 6506E is 9 slots versus 6 on the latter. Is there any other feature missing from the 6506?

I noticed that in the BOQ the guys have listed only 'two' WS-SUP720-3Bs for a pair of 6509s at the core and distribution. That means they are probably not supplying a redundant SUP in any of the 6509s. But then, why so many slots? Beats me too!


4. Connections to the core, according to the diagram, would be only 8, right? Two from each of the four neighboring switches in the distribution and server farm.

In the BOM, the guys have included this on each of their core 6509Es:


a. One 8-port 10 Gigabit Ethernet module

b. Ten 10GBASE-LR X2 modules


Do you see any need for this, based upon the BOQ?


5. You suggest that we incorporate L3 up to the access layer. Please elaborate on this aspect. Do you mean that the 6509 doesn't support L3 functionality in its base IOS?


6. The same for this point; I didn't quite get it. I would appreciate it if you could tell me how to go about achieving this.


7. It is Cisco's suggestion to connect my edge to the distribution and then to the core within the same building. I am still a bit confused as to the need to bring a distribution layer into the picture. Why would someone use a distribution layer?

(*I know it's going to be another long reply from your side but please do hold on to your keyboard one more time*)


8. Another point we need to consider soon. Our Internet speed is indeed slow, but then we are still at about a 2000-user base, not all of them given to browsing, and with no Internet mail.


9. For the server farm, I am beginning to look beyond Cisco. How about a third-party Unified Threat Management (UTM) system - something along the lines of Fortinet, SonicWall, Cyberoam, etc.?


The servers would then live behind a UTM, connected to stacked switches (minimal oversubscription) in each rack. All users would have the UTM IP as their gateway. This way, the servers are also protected by an antivirus/IPS solution.


Have you come across such an arrangement? Is it workable? You suggested that I use 3750Es in each rack; are there any oversubscription caveats there?


Lastly, you suggested using the 3750 instead of the 3560G; why was that?


Thanks a lot for your earlier reply.



Attachment: 
jwdoherty Fri, 08/10/2007 - 09:10

It's not clear what advantage the 3560Gs provide. Why not connect directly to the 6500s? If there are other devices to connect, for an L2 fan-out a stack of 3750Gs would be better. Also, even if you need the separate fan-outs, can the physical topology use just one 3750G stack for both the PSTN and Internet connections?


Instead of 4507s, use stacks of 3570s, 3570Gs or both, depending on whether you want to provide any gig to the desktop. Gig uplinks, initially, will probably be fine. If more bandwidth is needed, use Etherchannel or add a 3750-E to the stack and use 10 gig. (Note you can do L3 on 3750s with a software upgrade - not tied to specific sups as with the 4500 series.)


For the server side, instead of 6513s, use stacks of 3750-Es with 10 gig uplinks (assuming the servers will have gig). You could either have dual stacks to directly replace the shown 6513s or one stack per server row.


Collapse the distribution and core 6509s into a single pair (which would be the only pair of 6500s). Ensure the line cards have DFCs and use the Sup720.


Lastly, use the least expensive medium or module that supports the required bandwidth, e.g. copper 10 gig for the cross-connects between the 6509s.

Joseph, for some reason I haven't seen a firewall's 'inside' interfaces going directly to the core. Your post got me thinking about why that isn't usually done, though.

Your point about using the same stack of 3750Es for the PSTN and Internet connections is fair, though.


2. I guess what you meant as a replacement for the 4507s was 3750s (probably a typo there, '3570G'), for I failed to find that model on Cisco's site - or maybe I didn't look hard enough.


My query is: using a stacked 3750E setup versus a 4507R chassis, what advantages do I get apart from the obvious price factor?


3. At the edge, I get L3 on the 3750; how can I utilise it to further increase my network performance?


4. If I use stacks of 3750Es with 10G uplinks, what's my oversubscription? Do I get jumbo frame support for future NAS/SAN coming into the server farm?


5. I am in complete agreement with your idea of collapsing the distribution and core layers into one. I am only wondering why and where someone would use a demarcated core and distribution.

What are the advantages of the kind of design this guy is providing us with?


Your point about using DFCs on the line cards is a really good one. I have attached the BOQ to my earlier reply; please take a look. I guess their line cards support DFCs, and they have the Sup 720 on the 6509s.


Thanks again. Awaiting your insightful reply.


jwdoherty Mon, 08/13/2007 - 07:13

#2 The 3750G is one of the 3750 models; those with 10/100/1000 ports.


3750E vs. 4507R (BTW, I suggested the 3750 or 3750G for the user edge, the 3750-E for the server edge) - advantages: performance, more ports, redundancy without the need for a 2nd sup (frankly, although the 4500 series is solid, it's getting a bit dated, especially in terms of the chassis fabric's performance).


#3 Yes, you can do L3 on 3750s, and I'm not recommending you do or don't, but with regard to the question on performance, the major benefit would be to route within the stack between subnets without having to traverse the uplinks/downlinks. Usually not needed on the user edge; more of a possible benefit on the server edge.


#4 Your oversubscription is up to you. With a single 24-port 3750-E, the best case is 2.4:1. You can maintain a similar ratio in a stack if you use the other stack members' 10 gig ports. If you use 48-port models, the ratio becomes 4.8:1.
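(Worked out, and assuming a single active 10 gig uplink per switch, the ratio is just edge capacity over uplink capacity: 24 x 1 Gbps / 10 Gbps = 2.4:1 for the 24-port model, and 48 x 1 Gbps / 10 Gbps = 4.8:1 for the 48-port model.)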


The 3750-E supports jumbo frames (9216 bytes).


#5 The primary advantage of the classic 3-layer hierarchical design is that it allows scalability. This is still true, but what often seems to be overlooked is how much the performance of current-gen equipment has improved. Ask those who proposed the design to explain why they want it. Hopefully the answer will not be "that's the way we always design for this size."


A nice Cisco paper on gig design can be found here: http://www.cisco.com/en/US/netsol/ns340/ns394/ns74/ns149/networking_solutions_white_paper09186a00800a3e16.shtml


What I'm suggesting is that the "Collapsed Backbone - Small Campus Design" might be suitable for your existing building using current-gen equipment.


Yes, I do see DFCs on the BOQ. They would be for the 6748s since the 6708s have them built in.


I was surprised to only see LR modules for 10 gig. As I noted in my prior post, using the minimal modules for the distance can save quite a bit of $.


Also, if you're concerned about performance and oversubscription, the proposed 6513s do not offer 40 gig to most of the slots, and all the 8-port 10 gig cards are oversubscribed at about 2:1.

jwdoherty Mon, 08/13/2007 - 16:37

Two other thoughts came to mind.


First, do you really need/want dual 6509s or would a redundant 6509 be better? There are pros and cons for both.


Second, if you go with a collapsed backbone (either single or dual), normally only other fan out network devices would connect to it/them. However, you can make special exceptions for very high volume "user" devices. E.g. SAN or perhaps very busy servers such as corporate mail server(s).

Something caught my eye just recently: the two ASA 5540s are shown as dual-homed in the design. Is this some kind of new capability in the ASAs?


Now, do I really need dual 6509s or a redundant one? That's a tough one. I thought it was a redundant design and not dual. In fact, I was planning to distribute the VLAN load across both 6509s using RSTP, which would also cater for redundancy.


Your thoughts?


Btw, one of my friends tells me that they use Nortel, and Nortel has a proprietary protocol that shares the load across the two cores and seamlessly fails over in case one of them goes down. Does Cisco have something like that?


Pavel Bykov Tue, 08/14/2007 - 10:42

1. Correct. As for PoE, if you are planning to use IP telephony (in the future, if not now), go for PoE, since the cards are expensive (as you mentioned) and it would be a waste to replace so many linecards later. It all depends on your future plans - just as the SUP V is a good idea for the same reason.


2. Well, the early models of the SUP V had bad memory that made them lock up. We had one 4506 backplane failure (a second SUP wouldn't have saved us from this). We also had one 4507 that set itself on fire (!!!); luckily, the fire protection cut the electricity in time (the connector to the SUP started to short-circuit).


3. You shouldn't go for so many slots if you aren't planning to use them. There has to be a purpose or a future plan to use them.


4. Right. I think you can go for a design without a CORE at all.


5. There is L3 functionality, but Layer 2 creates an additional point of failure, does not easily support load balancing (the blocking state of STP), and is sometimes slow to converge. Also, creating local L3 domains shrinks broadcast domains and isolates failures (we had big L2 failures where our whole building network collapsed because of one bad PC NIC, or because of L2 information-processing overload - too many trunks). It's just a good idea to route the VLANs (create SVIs) right on the 4507.
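As a purely illustrative sketch of this point (the VLAN number, addresses and port numbers are made up, and it assumes an L3-capable supervisor such as the SUP V), routing a user VLAN directly on the access 4507 would look something like:

  ! enable routing on the access switch (if not already on)
  ip routing
  !
  vlan 10
   name FLOOR10-USERS
  !
  ! the SVI becomes the users' default gateway on this switch
  interface Vlan10
   description Floor 10 user subnet
   ip address 10.10.10.1 255.255.255.0
  !
  ! user ports stay ordinary access ports in that VLAN
  interface range GigabitEthernet2/1 - 48
   switchport mode access
   switchport access vlan 10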


6. Create point-to-point VLANs, with a mask like 255.255.255.252, and enable routing protocols. You will then have redundancy, fast convergence, manageability, failure isolation, and load balancing. It's all CEF, so it's not going to be slower than L2 in any case.
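Again just as a hedged illustration of this point, with made-up numbering and OSPF chosen as the example protocol (EIGRP would work just as well): each uplink gets its own small VLAN with a /30 SVI, and the routing protocol then converges and load-shares over the two uplink paths.

  vlan 901
   name P2P-TO-DIST1
  !
  ! the physical uplink carries only this point-to-point VLAN
  interface GigabitEthernet1/1
   description Uplink to DIST1
   switchport mode access
   switchport access vlan 901
  !
  interface Vlan901
   ip address 10.255.1.1 255.255.255.252
  !
  ! (repeat with a second VLAN/SVI for the uplink to DIST2)
  router ospf 1
   network 10.255.1.0 0.0.0.3 area 0
   network 10.10.10.0 0.0.0.255 area 0
   passive-interface Vlan10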


7. Distribution layer is a design guideline. It's done for manageability and control.


8. 2000 users with an Internet connection should have around 16 Mbps (2000 x 8 kbps).


9. No experience with that, sorry.



The 3750 has gigabit ports. It has a similar design to the 3560, but has StackWise. That is a very good technology for resiliency, redundancy, manageability, and so on: you see all the stacked switches as one switch, and convergence is extremely fast.



hope this helps.

Please use rating system to rate helpful posts.

Hi Pavlo


Once again, thanks for the reply and I have rated it duly.


I seek your advice on three points:


1. Say, on the tenth floor I create a single-subnet VLAN, 10.10.10.*, and my 4507 has an SVI of 10.10.10.1/24 defined on it. On the ninth floor I have an SVI of 10.10.9.1/24.

My understanding is that L3 routing for all traffic would be done on the core.

According to your point no. 5, how would I go about it alternatively, if I do L3 at my edge?


2. Please elaborate on your point no. 6 in a bit more detail with an example, like the ninth and tenth floor one above. I am a bit of a novice on this subject of P2P VLANs.


Rgds




Joseph W. Doherty Sat, 08/25/2007 - 13:23

If each uplink path is defined as a routed link (the purpose of the point-to-point VLAN on that link), then the routing protocol has two paths to utilize and, depending on the routing protocol, will use both. This is similar to the traffic flow between the distribution and core 6500s in your original diagram.


(For the edge, easy usage of both links might also be accomplished with some of the newer L2 technology, e.g. GLBP.)


Some other advantages of edge routing:


o Routed links are easier to traffic engineer. E.g. if you want to dedicate one link for certain traffic.


o If you see much traffic flowing between the 9th and 10th floors, and they are separate subnets, you could interconnect them with an additional routed link or links. Most routing would take the shorter path, bypassing the upstream router, i.e. one less physical hop.


o If you place multiple subnets on the same floor, routing on the edge can pass traffic between them without the need, again, to transit the upstream router.


The L2 issues that Pavlo mentions are very real. However, they usually arise when you extend VLANs across the network, e.g. the same subnet on floors 9 and 10. If you restrict VLANs to just one edge device, and route everywhere else, and don't have many traffic flows between same side edge devices, edge routing isn't as critical, and the choice between them becomes difficult.


I would agree that edge routing may better position your network for the future, but since your initial post did ask about economy, do keep in mind edge routing usually comes at an additional cost, either for hardware, software or both. However, if you choose not to do edge routing, I suggest selecting equipment that can easily be upgraded to do so.


(Pavlo, hope you don't take offense at my also answering Fahim's questions. Did want to inject the cost consideration.)


jwdoherty Wed, 08/15/2007 - 06:53

I'm not familiar with the ASA 5540s, so unable to comment on whether they can be dual homed.


Distributing the load across multiple 6509s is often dependent on how you configure your topology, for instance whether it's L2 or L3. Keep in mind that often the performance bottleneck isn't the 6509 itself, but the uplink bandwidth not being fully utilized across multiple paths. Cisco GLBP can be attractive for that at L2.
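To illustrate the GLBP idea (a hedged sketch only; the group number and addresses are invented), both 6509s share one virtual gateway address, and GLBP hands out different virtual MACs to different hosts so that both boxes end up forwarding traffic:

  ! on the first 6509
  interface Vlan10
   ip address 10.10.10.2 255.255.255.0
   glbp 10 ip 10.10.10.1
   glbp 10 priority 110
   glbp 10 preempt
   glbp 10 load-balancing round-robin
  !
  ! the second 6509 uses 10.10.10.3 with the same "glbp 10 ip 10.10.10.1";
  ! all clients keep 10.10.10.1 as their default gateway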


With regard to your last question, I'm unaware of any publicly announced feature that does anything like you describe.

steve0miller Thu, 08/16/2007 - 18:17

There's a lot to like about this network design, but if your requirements are "simple" then this is kind of overkill.


Access Layer:

- If you will have approx. 250 nodes per floor, then the 4507Rs will not scale enough for your needs; a 4507R only has a max of 240 copper ports. If your requirements truly are simple, then redundant supervisors and 10 gig to the distros are too much. I'd go with the Catalyst 3750-48PS or 3750G-48PS (this assumes you want PoE for VoIP phones and wireless in the future). A Catalyst 3750 stack can scale up to 432 ports.


Distro Layer:

- I agree with Slidersv: 6509Es in the distro are too much; go with 6506Es. They are cheaper and will have more than enough capacity for your current and future needs, especially if you get the WS-X6748-SFP line modules, which are 48-port SFP blades and are fabric enabled. Also, go gigabit from the building distros to the core, but bundle a couple of gig ports in an EtherChannel for greater bandwidth.


- I too think it's awkward to have the Enterprise Edge module hanging off the building distros. Ask your integrator why they do it this way and report back to us; I'd like to know. I've worked on some very large networks, and the provider edge and other modules have always hung off the core, not the building distros.


Core Layer:

- Core looks fine, maybe that too can be a 6506E instead of a 6509.


Server Farm:

- I like the server farm module; I would not change anything. Keep the 10 gig between the server farm distros and the core; this will future-proof your investment.


Enterprise Edge:

- Why do they have two 3560Gs right before the ASA firewalls? They can connect the ASA devices right to the distros/core.

- Agreed with the others too: your Internet links are too slow.


Looking at the BOM, they have redundant supervisors in all the 6500s. I personally think that since you will have dual 6500s at the building distro, core, and SF distros, you don't need dual supervisors in each chassis; one will do. Also, if this network is going to be all Cisco, why go with OSPF as your routing protocol? My preference would be EIGRP.


justin.stevens Fri, 08/17/2007 - 04:42

Like the guy above, at first glance I like this design. It's very "leading edge", because most customers don't do 10 gig throughout the network. That said, 10 gig has been out for 4-5 years now and you're building a new network, so I say go with it.


Before I say anything: there are multiple ways to skin the cat. It comes down to the network managers/engineers and their preferences, and it comes down to budget.


Access Layer ==> There is no right or wrong: 3750s, 4507s - heck, for customers that have the money I've seen 6513s in the IDFs/closets. What do you want?


Distro/Core ==> Collapse or don't collapse; it's a budget thing. Can you afford 4 boxes or only 2?

The answer isn't a Layer 2/3 issue, it's Layer 8 or 9 (politics/budget) :). Cisco recommends the old Hierarchical Design Model and now the newer Enterprise Composite Network Model (for more than a campus, and bigger).


Server farm ==> Go with the 6513s.


Core ==> Go with Layer 3 /30 point-to-point links. Anyone that doesn't is missing the boat these days. The trend is to get rid of L2 in the distro/core and now even the access layer. Why bother with the hassle of trying to make sure your Layer 2 settings match your L3 these days? STP, RSTP, UplinkFast, BackboneFast - minimize that stuff.


Routing Protocol ==> EIGRP has an easier learning curve. OSPF is for when you're sticking in someone else's equipment so that you can work and play well with each other. But why would you do that when you've got a chance to have one solid new network that can be supported end to end with one TAC call? Avoid all the finger-pointing stuff and stick with one vendor.


Firewalls ==> it's a speciality; go with what your in-house security team is trained in and can support, so that you won't feel the pain when something isn't set up correctly. PIX and now ASAs are easy to use. Don't do SonicWall and that other stuff (that's a save-money approach). The other day I was with a customer who had the all-in-one red firewall (sorry, I can't remember the name); there was an issue, I had to call India for support, and got the runaround. The fact is, the two best firewalls that most people buy are Check Point and the ASA.


PS: Move the Enterprise Edge links to your campus backbone/core; my guess is the guy made a mistake. That diagram is very well done - the person who made it knows what they are doing. 10 gig uplinks: if you can afford them, go with them. If not, back them down to (two) 1-gig links and EtherChannel from there; you can get up to 8 links if you need the extra bandwidth.
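For what it's worth, a minimal sketch of bundling two gig uplinks into one logical link (port numbers are just examples, and the far end needs a matching configuration):

  interface range GigabitEthernet1/0/49 - 50
   description Uplinks to core, bundled
   switchport trunk encapsulation dot1q
   switchport mode trunk
   channel-group 1 mode active
  !
  ! Port-channel1 is created automatically; more members (up to 8)
  ! can be added later by putting additional ports into channel-group 1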


At the end of the day, we all follow guidelines and use tools. Cisco offers to sell us everything, and that's what makes them great: end to end, they are a one-stop shop with great support. (Not to say that their Tier 1 support is the fastest or that they can solve 100% of your problems without going to level 2), but...


Good luck

anandramapathy Sun, 08/26/2007 - 18:36

Thanks for that. Can you recommend the best resource/book for this kind of design, one that covers it end to end?


I have been wanting to master this for some time now.

pkaretnikov Wed, 08/29/2007 - 12:15

My 2 cents:


I agree with collapsing the CORE and distribution layers, but I would recommend that you stick with 4507Rs at the edge. The redundant power supplies are a lifesaver. I know it's a rarity that they fail, but switching out a UPS, or a power cable getting pulled out accidentally, is a huge problem for end users in the middle of the day. You don't need the redundant sups, but having the capability in the future is always good. Floors with key personnel and the big bosses ought to have that extra level of service.


Recently I had my network "upgraded" (read: downgraded) and the 6509s were switched to stacks of 3750s, and it's been nothing but a pain. It hasn't been a performance hit, but the redundant power is sorely missed. Providing all of the closets with enough RPSs would be too expensive, so now when a UPS needs to be replaced we have to lug around RPS 675s...

The consultant has returned with a new diagram.


Now, with all these inputs, I went back to our consultants and apprised them of our needs. I do know some of you have been in favour of the 4507s, but I thought of trying out stacked 3560Gs for easy manageability. I will ask the guys if they come with redundant power supplies.


So now, they have come up with another updated design.


The guys still insist that they cannot do without a distribution and a core. Last time, our general consensus was that we are being oversold things we don't really need, for example the pair of C3560Gs on the two sides of the previous design.


I also can't seem to make much sense of the server farm switch, the C6513. If I include the FWSM and IDSM modules, would I not get performance degradation despite the 10G connectivity between the C6513s and the C6509s?


They have downgraded the ASA from the 5540 to the 5520 and maintain that it will have no effect.

The enterprise edge routers have also been downgraded from the C3825 to the 2821. I wonder what effect this would have?


Any other comments on this design please?


Now my main worry has shifted from the design aspect to the prospect of being 'oversold' / 'undersold', while looking at complete redundancy and security.


Please advise on where I can make safe cutdowns. Thanks to you guys for all the inputs; I am so glad that I came here for expert comments.




Attachment: 
Joseph W. Doherty Wed, 09/05/2007 - 15:59

3560G or 3750G? The former doesn't stack.


Neither comes with redundant power supplies, although a separate RPS unit can be used. If used, one RPS usually covers up to 6 devices, for a single unit's power module failure. Coverage often will not support multiple unit failures, such as failure of the power source.


Seeing how you have gig for the user ports, and if you stack, were you going for 10 gig uplinks or Etherchannel gig? Depending on the number of links in the Etherchannel, the cost of the interface modules might make the 10 gig less expensive.


If you want 10 gig, you'll need the 3750G-16TD, which goes EOS 10/15/07, or the 3750-E. (I would recommend the latter for the uplink switches.)


"The guys still insist that they cannot do without Distribution and a Core."


I see six 10 gig links connected to each core switch. So you could ask "the guys" how the core 6500s provide benefit vs. just connecting the "distribution" 6500s to the server 6500s. The port count wouldn't change on either of those.


If the "core" 6500, indeed, only has 6 10 gig links, and is only routing between those links, why you could even replace the pair with a stack of 4 3750-Es. This provides only 4 10 gig connections; the stack eliminates the need for the dual 10 gig cross connects.


I too might be concerned about the performance impact of the FWSM (5 Gbps; up to 4 per chassis [20 Gbps]) and especially the IDSM-2 (500 Mbps; up to 8 per chassis [4 Gbps]) within the server 6513s. Much would depend on what traffic you're going to run through them.


With regard to the ASA and router downgrades, much depends on what load they will be subject to. What's the expected traffic load for each?


"Any other comments on this design please?"


Conventional design, but I still believe dual 6509s using Sup720s and DFC line cards with 10 gig ports would work as a collapsed backbone. Use edge switches to feed to/from the 6509s' 720 Gbps fabric and the chassis' 400 Mpps.


PS:

Consider that, in your diagram of the proposed design, the server-side 6513s carry all traffic except client to client. So assuming there is not much of that, and if they can carry that load too, dual 6500s should be able to carry your whole infrastructure.

After months of negotiations, we are down to two options, on which I am finally seeking community recommendations here at NetPro.

To start with, thank you all for your comments so far, which have been very insightful.


Attached are the two option diagrams provided by our SIs, Option I and II.


* Option I gives me the facility to use the C3750E, which starts with a 1G uplink to the core but is upgradeable to 10G in the future.

Core, distribution and server farm collapsed into two 6509s.

Dual C3750s at the edge providing connectivity to two ASA 5540s, although I have yet to understand how they intend to multihome the firewalls as depicted in the diagrams of Options I & II.


* Option II: C4507Rs at the access layer, with separate distribution and core (two 6509s each). There is a K9 firewall module on the two core switches in their BOQ.

The two ASA5540s are again connected the same way as before, multihomed.


Based upon the suggestions in this thread, the intention is to go for Option I: collapsed backbone, C3750Es with dual 10G uplinks to the core.


Your suggestions about the flaws, merits and drawbacks of Option I vis-a-vis Option II are sought before we sign on the dotted line.


Regards.



Attachment: 
Joseph W. Doherty Sun, 10/14/2007 - 15:35

"* Option-I, gives me facility to use C3750E which starts with 1G uplink to core but is upgradeable to 10G . . ."


"Based upon the suggestions in this thread, the intention is to go for Option I, collapsed backbone, C3750E with dual 10G uplinks to core."


You may run out of ports moving to 10 gig unless you also place 3750-Es facing the servers.


On the user-facing side, you may want to consider using 3750Gs for the non-uplink stack members.


If your budget was able to support all the 4500s and multiple 6500s, but you use 3750s instead, you might want to consider going with 10 gig now for the uplinks. You might also want to consider using dual sups for NSF.

Thanks for the reply, Joseph. If I go for the 4507R vis-a-vis the 4506, I get the same number of line cards (5) but single SUP support. Which one is better?


Also, both the 4507R and 4506 models seem to support: [Supervisor Engine II-Plus, II-Plus-10GE, IV, V, V-10GE].


What should be my basis for choosing the SUP engine? The vendor has mentioned "Catalyst CAT 4500 Sup II + 10GE" in his BOQ.

Is this OK?

Joseph W. Doherty Fri, 10/19/2007 - 16:02

"If I go for 4507R vis a vis 4506, I get the same number of line cards(5) but Single SUP support. Which one is better?"


The "R" chassis is "better" if you want to provide a redundant sup. Without it, if the sup fails, the chassis fails. Unless your user edge is critical, single sup chassis might be fine especially if you keep one spare on hand to cover all the 4506s.


The "Catalyst CAT 4500 Sup II + 10GE" is primarily a L2 switch with some L3 capabilities, see: http://www.cisco.com/en/US/products/hw/switches/ps4324/products_data_sheet0900aecd80356bde.html. The Sup V-10GE is a full L3 switch, see: http://www.cisco.com/en/US/products/hw/switches/ps4324/products_data_sheet0900aecd801c5c66.html and is a bit faster and has additional hardware features too. The Sup II should be fine if you just plan on doing L2 at the user edge. However, unlike the 3750s, you need to replace the sup to get full L3, software upgrade won't do. For the user edge, I would expect almost all traffic to be to/from the uplinks, so full L3 isn't likely to be required.

Joseph, your replies have been most helpful, and that's one of the reasons I keep rating them and keep posting again and again.


One last query for the week probably :)


On the server edge, I do not want my servers to connect directly into my core, primarily for purposes of manageability and fault finding. I am looking at about 100 servers max over a period of 5 years, including high-capacity bandwidth hoggers such as file and DB servers. I am contemplating what to choose for my server farm aggregation switch model.


I have three options there:


1. I connect my servers to a 4507R with redundant SUP V-10GEs (full L3 capabilities), with a redundant 10G uplink to the core.


2. I connect my servers to stackable 3750Es with 10G uplinks to the core.


3. I connect my servers to Catalyst 4948-10GEs with the Enhanced (ES) image loaded and 10G uplinks to the core.


Which of the above is the best option for me, and what should my criteria be? Oversubscription caveats, costs? How do I cater for port redundancy?


I think the 3750E would be the cheapest, with the 4507R the costliest.

Joseph W. Doherty Sat, 10/20/2007 - 07:02

For the server edge, and if we're dealing with gig edge ports, the 6 Gbps per line card fabric connection concerns me for high density port cards (e.g. 48 gig ports) on the 4500 series.


The 4948-10GE has much going for it but could force you to use lots of 10 gig ports, either to your core 6509s or even daisy chained among a 4948 stack.


The 3750-E is the most interesting option because of its high-speed stack option. For maximum performance, you could have a single unit uplink to the two core 6509s (similar to using a 4948). Or you could build a stack of 3 units, and you get performance similar to a 6505 (no such chassis) with dual sups. Or you could build a stack of 9 units and have something somewhat like a 6513 with dual sups. Additionally, one size doesn't have to fit all; you can mix different-size stacks.


Another interesting option with 3750-E stacks: assuming you only start with dual uplinks per stack, the additional 10 gig ports can be used either as channel bundles to increase bandwidth to/from the stack and/or as 10 gig ports to selected hosts.


With regard to port redundancy, you can connect servers to different 3750-Es, either within the same stack or in different stacks. The former will be a little less expensive.
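As a small illustration of that (hypothetical port numbers and VLAN; this assumes simple active/standby NIC teaming on the server, so no EtherChannel is needed), the two server NICs just land on ports of two different stack members:

  interface GigabitEthernet1/0/10
   description SRV-DB01 NIC1 (active)
   switchport mode access
   switchport access vlan 100
   spanning-tree portfast
  !
  interface GigabitEthernet2/0/10
   description SRV-DB01 NIC2 (standby)
   switchport mode access
   switchport access vlan 100
   spanning-tree portfast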


Also with the 3750-E, you could start with just the basic image until you see a need for full L3 at the edge. If you do move to L3, again, you don't have to do it (and license it) for all edge stacks if unneeded.


The biggest caution with the 3750-E is it's a very new device and doesn't have the track record of the 4500 or 6500 series. However, Cisco, I think, is one of the best vendors in working to make it right.

"For the server edge, and if we're dealing with gig edge ports, the 6 Gbps per line card fabric connection concerns me for high density port cards (e.g. 48 gig ports) on the 4500 series."


Joseph, this is interesting. I didn't know line card throughput also mattered. I knew that by using the Sup II+ 10GE I get 108 Gbps fabric connectivity, and by using the Sup V 10GE on the 4507R chassis, I get 136 Gbps.


Now, if I only get 6 Gbps per line card, that means a max of 30 Gbps off 5 line cards of 48 gig ports each.


What advantage does using the SUP II+ or SUP V give me in terms of switch fabric throughput?


Also, I searched hard through the datasheets pertaining to the 4500 series but couldn't sink my teeth into this particular piece of highly relevant info, i.e. 6 Gbps per line card on the 4507R. Where can I find this?


Joseph W. Doherty Sun, 10/21/2007 - 05:07

See table 1 in http://www.cisco.com/en/US/products/hw/switches/ps4324/products_data_sheet0900aecd802109ea.html, and look at the "wire rate" column. For one example, see the WS-X4548-GB-RJ45; it notes "8-to-1". Or, for another example, see "Bandwidth is allocated across six 8-port groups, providing 1 Gbps per port group" within the description of "Figure 24. WS-X4548-GB-RJ45".


See table 2 within http://www.cisco.com/en/US/products/hw/switches/ps4324/products_data_sheet0900aecd801792b1.html. Note both the 4506/4507R with either the Sup II-Plus-10GE or Sup V-10GE are listed as 108 Gbps, 81 Mpps. For bandwidth, take the 30 gig (6 Gbps x 5 slots) plus the dual 10 gig uplinks, which gives 50 gig. For full duplex, double it, and you need 100 Gbps.


"What advantage does using SUPII+ or SUP V gives me in terms of switch fabric throughput?"


Either will provide wire rate performance for supported chassis within chassis limitations.


PS:

The 6500 series using the Sup720 provides either 20 or 40 Gbps per slot depending on the chassis. So, for instance, 48-port gig cards can't deliver full rate to/from the card.


The 3750-E Stackwise+ is listed at 64 Gbps but it's really 32 Gbps (full duplex); compare that against the 4500's 6 Gbps or the 6500's 20/40 Gbps.


PPS:

The 3750 Stackwise is listed at 32 Gbps but it's really 16 Gbps (full duplex).

In light of this new bit of info, I might need to reconsider our decision to go with stacking 5 x 3750Es within each IDF as compared to 4506.


You mentioned that 3750E-48PD Stackwise+ is listed at 64 Gbps but it's really 32 Gbps (full duplex) despite having 128-Gbps switching fabric; when we compare against 4506 with SUP II + 10GE, I get 108 Gbps but according to your analysis, I am only effectively getting 6 Gbps connectivity per line card (X4548-GB-RJ45; Five of 'em) to fabric.


Can I safely deduce that without considering uplinks to core, which will be the same in either case, I am about 2Gbps better (throughput wise - 30Gbps vs 32 Gbps) with stacked 3750E vis a vis 4506/7R with 48port, 1gig Line cards in five slots?


Joseph W. Doherty Mon, 10/22/2007 - 11:37

"In light of this new bit of info, I might need to reconsider our decision to go with stacking 5 x 3750Es within each IDF as compared to 4506."


What information is causing you to reconsider?


"You mentioned that 3750E-48PD Stackwise+ is listed at 64 Gbps but it's really 32 Gbps (full duplex) despite having 128-Gbps switching fabric; when we compare against 4506 with SUP II + 10GE, I get 108 Gbps but according your analysis, I am only effectively getting 6 Gbps connectivity per line card (X4548-GB-RJ45; Five of 'em) to fabric. "


Correct, although the 128 Gbps switching fabric is per each 3750-E, while the 4500's 108 Gbps is the fabric for the whole chassis. (Similar issue with pps.)


"Can I safely deduce that without considering uplinks to core, which will be same in either case, I am about 2Gbps better (throughput wise - 30Gbps vs 32 Gbps) with stacked 3750E vis a vis 4506/7R with 48port, 1gig Line cards in five slots?"


No, because comparing bandwidth between the 4500 fabric and the 3750-E is a bit like comparing apples and oranges. The 4500 line card gets 6 Gbps to a fabric, while the 3750-E gets 32 Gbps to a ring. From this perspective, the 3750-E provides about 5 times the bandwidth to a stack member vs. what the 4500 does to a line card.


On the other hand, since the 4500 provides a fabric, the full 6 Gbps is dedicated to each line card, whereas on the 3750-E you might need to jump between non-endpoint stack members, sharing the ring bandwidth between such members. The more members in the stack, the more likely the effective ring bandwidth will be reduced.


For an edge device, assuming most traffic will transit the uplinks, the 3750-E appears to be a better fit since the bottleneck will likely be the uplinks, not bandwidth to/from the line cards.


For a distribution or core device, or an edge device with an unusually high many-to-many traffic distribution, a fabric architecture will likely be better (e.g. 4500, 6500 with sup 2 and SFM, or sup 720).


PS:

Another possible contender for an edge device is a 6500 with sup32/sup32-PISA. This architecture uses the 6500's 32 Gbps shared bus. Compare its bus architecture against 3750/3750-E stack.


Since the 3750-E supports local switching and each unit has a 128 Gbps switch fabric, placing hosts that need to communicate a lot with each other on the same 3750-E would help maximize their performance. Since the 3750-E ring also supports spatial reuse, placing such hosts in adjacent stack members might also help performance.

Joseph, thanks.


Now the cost factor needs to be factored in. I realised that a stack of five 3750Es within each IDF would cost me about $200K more than if I use 4506s across all floors. I was until now under the impression that a chassis is costlier than a stack.


How about if I use 3x 3750G and 2x 3750E (top and bottom) to uplink the 10G to the core within each of my IDFs?


Unlike the 3750E at 64 Gbps and a forwarding rate of 101.2 Mpps, the 3750G is listed at 32 Gbps with 38.7 Mpps.


Now, when I stack these two models together with the 3750Es doing the uplinks, what would be the net drop in efficiency?

Any other problems in the future that you foresee with such a mix and match?


PS: I am afraid that if I consider, as per your alternative suggestion, a 6500 at the user edge, the price will shoot up even further.

Joseph W. Doherty Tue, 10/23/2007 - 04:19

For the user edge, although the 4506 offers less raw bandwidth to the line card's ports than a 3750-E stack, I think it would be good enough; I don't see the 8-to-1 line card oversubscription as a real problem. One major issue is the loss of redundancy, without a 4507R and a second sup, vs. the stack. The second major issue is the decision, when using 4500s, between the Sup II-Plus-10GE and the Sup V-10GE. How does the 4500 sup choice impact your cost?


With regard to the server edge, I think raw bandwidth becomes more of an issue.


With regard to mixing 3750Gs and 3750-Es in the same stack, this can be done, although you lose the Stackwise+ bandwidth and its spatial reuse (and the performance of the 3750-E). However, since I believe a user IDF usually isn't as demanding as a server edge, 3750Gs alone are also probably good enough. The problem is how to provide the 10 gig uplink. There was the 3750G-16TD switch, but it's EOS. This only leaves using the 3750-E to obtain 10 gig, which is more expensive (so was the 3750G-16TD). Using one at the top and bottom is one method. (It's a design I suggested, and that was accepted, at a current customer, although their IDF stacks are up to the max of 9 units.)


With a mixed stack, you can also use just one 3750-E as a head switch (i.e. one 3750-E and four 3750Gs) and run the dual 10 gigs from it. If you worry about its failure, it's a situation like a 4506 losing its sup. However, if you have a third fiber run, you can run a "failover" gig connection from the bottom of the stack. (Another option is to just run one "primary path" 10 gig from the head 3750-E and one "failover path" gig from the bottom 3750G. You give up a 10 gig, but for a user IDF that's probably not really necessary, and it saves on cost.)


PS:

The mention of the 6500 with sup32 for IDFs was to contrast its bus architecture vs. stack ring. I.e., the bandwidth of a fabric isn't always necessary.


