Core network comparison with collapsed core

wilson_1234_2
Level 3

I recently had the opportunity to visit a company with about the same hardware as we have, but a different design.

They have about 300 people at their site; we have about 200.

Their performance is terrible, ours is not.

Them:

They have a 6509E series pair running HSRP on their VLANs, with no QoS configured at all on the core switches.

No QoS on the data switches.

QoS on the voice switches.

All of their switches (access, distribution, core) are configured as layer 3 with EIGRP, and they have voice and data on separate physical networks.

They have 3560 access switches connected via port groups to 3725 distribution switches, which connect via port groups to the 6509s.

They have each PC linked to a 3560 and each phone linked to a different 3560.

Voice and data have their own 3725 distribution switches, and both voice and data distribution link to the core 6509Es.

Each switch is a different VLAN, so each data switch routes its own VLAN up to the distribution switch, each distribution switch routes all of the data VLANs up to the core through the distribution VLAN, and the core then routes between the VLANs.

Same for voice: each voice switch has its own VLAN and routes through the distribution VLAN to the core.
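For what it's worth, their routed-access design probably looks something like the sketch below. This is purely illustrative: the VLAN numbers, addresses, interface names, and EIGRP AS are my own assumptions, not taken from their actual configs.

```
! Hypothetical 3560 access switch in the routed design described above:
! one data VLAN per access switch, a routed uplink, and EIGRP everywhere.
ip routing
!
interface Vlan110
 description Data users on this access switch
 ip address 10.1.110.1 255.255.255.0
!
interface GigabitEthernet0/49
 description Routed uplink to the data distribution switch
 no switchport
 ip address 10.1.255.2 255.255.255.252
!
router eigrp 100
 network 10.1.0.0 0.0.255.255
```

Note that in a design like this, traffic between two users on different access switches crosses several routed hops (access to distribution, distribution to core, and back down) before it reaches the other host.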

Us:

We have a 6509 (standard) configured as layer 3, with QoS configured on the core.

We have most users linked directly to a blade on the 6509, with the ports configured for voice and data.

The PCs all plug into a phone.

All of the VLANs are configured on the 6509s, and the access switches we do have are trunked directly to the 6509s, using STP to prevent loops.
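In other words, a port in our collapsed-core setup probably looks something like this (the VLAN IDs and interface names are made up for illustration, not our real config):

```
! Hypothetical port on the 6509 blade: the PC plugs into the phone,
! data rides untagged on the access VLAN, voice is tagged on the voice VLAN.
interface GigabitEthernet3/1
 switchport mode access
 switchport access vlan 10
 switchport voice vlan 100
 spanning-tree portfast
!
! Hypothetical access-switch uplink, trunked straight to the 6509;
! STP blocks any redundant path to prevent loops.
interface GigabitEthernet0/48
 switchport mode trunk
 switchport trunk allowed vlan 10,100
```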

There is no comparison in performance, we scream compared to them.

My question is: is the access/distribution/core scheme really needed for such a small number of people?

It also seems it would be better to remove a lot of the layer 3 routing and use trunk uplinks (even if the access/distribution/core scheme stays).

Also, I think the fact that they have no QoS on the core is hurting their performance.

Are the aggregated (EtherChannel) ports doing anything to hinder performance compared to a trunked uplink?

Any thoughts on the above?

11 Replies

Leo Laohoo
Hall of Fame
They have each PC linked to a 3560, each phone linked to a different 3560.

This scheme is very rare but not unheard of.  I've worked in an organization like this.  It's a very expensive setup (because you have to purchase two switches to do voice and data).  In our case it was due to the decision-makers being unwilling to make a decision.

My question is: is the access/distribution/core scheme really needed for such a small number of people?

It depends.  It depends on whether you have the know-how.  In reality, at that size you can collapse distro and core together (6500) but leave access alone.  I've worked in an organization that had all three collapsed into one, all because the "technical engineer" specialized in a 2900XL/3500XL-like environment.

Also, I think the fact that they have no QoS on the core is hurting their performance.

We don't have QoS in our network and response times are exceptional.  Our issues are poorly configured servers.  So QoS isn't the only thing that can cause issues.  Improperly maintained routing can also cause issues.

vmiller
Level 7

Leo has good points.

Just from reading this, it does sound overly complex for the size of the user community.

Regarding QoS in the core: all the core needs to do is not remark any packets. Core switches should be "bit spitters" optimized for convergence.

Your design screams because it lacks complexity. Always keep that as a design principle.

QoS is not only for prioritization, but also for buffering and queueing on the ports. What about buffering and queueing on the core? Aren't these things only available if QoS is enabled?

If so, why is it not better to have these things than not to have them?
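For context on that question: on Catalyst access switches of this generation (3560/3750-class; the 6500 has its own QoS model), my understanding is that egress queueing and buffer tuning only take effect once QoS is globally enabled; with QoS off, a port effectively runs a single FIFO queue and markings pass through untouched. A minimal sketch of vmiller's "don't remark in the core" approach might look like this (interface names are hypothetical):

```
! Hypothetical 3560-style config: enable the QoS machinery,
! then trust existing DSCP on uplinks instead of remarking.
mls qos
!
interface GigabitEthernet0/49
 description Uplink -- trust markings set at the edge
 mls qos trust dscp
```

The flip side is worth noting: once `mls qos` is enabled, ports left untrusted at their defaults rewrite incoming markings to zero, which is itself a common way to hurt voice performance by accident.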

What about layer 2 switching from access to core vs. layer 3 routing from access to distribution to core? It seems to me the former is going to be much faster than the latter, especially if there is buffering and queueing on the core.

I do not see the benefit of having a distribution layer on such a small network; it seems to add a step that forces an extra routing decision.

Am I wrong?

I don't know if this is going to win any friends ...

Again, it depends.  One of the biggest problems with any network (no matter the size or complexity) is "generation change".  Whenever a generation change occurs (i.e., the person who knows the network and systems best moves on), the people left behind tend to lose some of the most basic information, such as old configurations and old IOS versions.

I can't say what the configuration differences between your network and the slow one are, but I would like to compare those first.  Only then will you be able to determine if QoS alone can make any improvements.  Frankly, I think the answer may not be as complex as QoS and could be as simple as an incorrect EtherChannel configuration (for example).

I'm not saying that I disagree with you, because you are not wrong.  If this is brainstorming, then your idea should be on the table for everyone to consider.

Leo,

You ARE winning friends with me; your answers are exactly the reason I posted these questions and replies. I want to get someone else's perspective.

Brainstorming is exactly what I am looking for, and I appreciate you throwing your ideas in.

I want to know why, and at the moment I was thinking of the things I mentioned, which are also some of the differences: QoS, routing being slower than switching, and trunking vs. EtherChannel.

EtherChannel was one question in my original post: what could make it slower than a trunk link?

On the incorrect EtherChannel config, can you give me an example of what you are talking about?

On the incorrect EtherChannel config, can you give me an example of what you are talking about?

There was one time when we didn't know why a server was slow.  No one cared and no one bothered.  Then one day I checked the logs, and they showed the interfaces going up/down.  The server wanted LACP but the EtherChannel side wasn't configured for it.  Don't know why it negotiated in the first place.  Weird, but that was the only time I saw something like that.

Another thing: the server side is doing NIC teaming but the switch side isn't in an EtherChannel. My view is that it's suboptimal, besides the fact that the CAM table keeps flushing since the switch ports aren't in a port channel. Also, a good question is: what kind of poor performance are they having?
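To make that mismatch concrete, here's a hypothetical version of the scenario Leo described (interface numbers and the VLAN are made up): if the server NIC team runs LACP, the switch side must negotiate LACP as well. A static channel never sends or answers LACPDUs, and a mismatch can show up as exactly the kind of interface flapping he saw in the logs.

```
! Hypothetical switch side of a two-NIC LACP server team.
interface Port-channel1
 switchport mode access
 switchport access vlan 20
!
interface range GigabitEthernet1/0/1 - 2
 switchport mode access
 switchport access vlan 20
 channel-group 1 mode active
! "mode active" = LACP. Use "mode on" only when the server team is
! static (no negotiation protocol); mixing the two is the classic misconfig.
```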

So you don't think the fact that one network is switching traffic and the other one is routing through each layer (access to distribution, distribution to core) has any bearing on the latency?

It seems to me that it would make a difference, since everything on one network is being switched by the 6509 core, either to another port on the switch or to the WAN router, whereas on the other network each layer has to make a layer 3 routing decision. Maybe not a big one, but the route processor on a layer 3 switch is supposed to be much slower than hardware layer 2 switching.

So you don't think the fact that one network is switching traffic and the other one is routing through each layer (access to distribution, distribution to core) has any bearing on the latency?

In my personal opinion, if a network is designed right (and regularly maintained), it doesn't make a difference.

For example, a lot of people post here asking whether to use a Layer 2 or Layer 3 LAN.  Let's say that funding is no issue; my next question is what knowledge base you have.  Because it's not easy to run Layer 3 in your LAN if you don't have the knowledge base to maintain it.

Maybe your network runs like a well-oiled machine because you maintain it like one.


When everything is working at best, on most modern L3 switches there's very, very little speed lost doing L3 forwarding rather than L2 switching.  Again, that's at best, because L3 switches can lose their hardware acceleration a little more easily than L2 switches.

If your network requirements are such that a difference in latency of microseconds is important, then you'll most likely want L2 (e.g. the difference in latency between store-and-forward and cut-through switching).

From what you've described between the two networks, I would wonder whether they (network with L3 core) have been encountering unicast flooding.

The possible unicast flooding I have in mind is listed as case #1 in http://www.cisco.com/en/US/products/hw/switches/ps700/products_tech_note09186a00801d0808.shtml.
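For reference, case #1 in that document is the asymmetric-routing scenario: the CAM entry ages out (300 seconds by default on many Catalyst platforms) while the ARP entry is still valid (14400 seconds by default on IOS), so return traffic gets flooded to every port in the VLAN until the MAC is relearned. The workaround described there is to bring the two timers in line; a sketch (VLAN and values are illustrative):

```
! Keep the CAM entry alive as long as the ARP entry, so the MAC
! is not aged out while ARP still points at it...
mac address-table aging-time 14400
! ...or, on the L3 interface, shorten the ARP timeout instead:
interface Vlan10
 arp timeout 300
```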

Look at vmiller's picture, how funny!

https://supportforums.cisco.com/people/vmiller
