
Bandwidth from Access Layer to Distribution Layer

Jason Jackal
Level 1

Folks:
I am currently on Chapter 12 of "CCNP Switching 642-813, Official Certification Guide" (ISBN 978-1-58720-243-8). I am not entirely grasping the three layers, and I was hoping someone could offer insight in a different way.

I believe I understand that switches in the access layer can be Layer 2 devices (2950, etc.) and that devices in the distribution layer should be multilayer devices such as Layer 3 switches (3750), with inter-VLAN routing taking place at the distribution layer. But what I do not understand is how one accounts for bandwidth and traffic from the access layer switches to the distribution switches.

Let's use a 24-port 2950 switch located at the access layer. If everyone were online and communicating, the total traffic for the switch would be 4.8 Gbps: each port provides 100 Mbps in full duplex, so (100 × 2) × 24. So how does an engineer spec out the required uplink ports from the access layer to the distribution layer?
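
Putting my arithmetic another way, as a quick Python sketch (this is just the calculation above, nothing more):

ports = 24
port_speed_mbps = 100          # each access port runs at 100 Mbps
directions = 2                 # full duplex: transmit and receive counted together

total_mbps = ports * port_speed_mbps * directions
print(f"{total_mbps / 1000:.1f} Gbps")   # 4.8 Gbps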

I am sure this is easy; however, I am not getting the concepts. Any insight is great.


6 Replies

Peter Paluch
Cisco Employee

Hi Jason,

First of all, do not sum together the upstream and downstream bandwidth requirements in a switched Ethernet environment. The amount of data flowing upstream has no impact on the bandwidth available downstream. A bottleneck in either direction would not be alleviated by keeping the opposite flow smaller, so you should not add the ingress and egress traffic together.

So if, on a 24-port 2950 switch, every station were downloading data at the full rate, a total of 100 Mbps × 24 = 2.4 Gbps would be needed. Assuming you are using a 2950 switch with the two 1 Gbps uplink ports, you could bundle them into an EtherChannel and have a theoretical throughput of 2 Gbps in the ideal case, approaching the required 2.4 Gbps.
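
If it helps, here is that arithmetic as a small Python sketch; the port count, access speed and uplink figures are simply the ones we are discussing in this thread, not a general recommendation:

# Per-direction demand on a 24-port 100 Mbps access switch
# versus a 2 x 1 Gbps EtherChannel uplink.
access_ports = 24
access_speed_mbps = 100           # per port, one direction only
uplink_speed_mbps = 1000          # one gigabit uplink
uplinks_in_etherchannel = 2

# Worst case: every station pulls data at full rate in the same direction.
worst_case_demand_mbps = access_ports * access_speed_mbps            # 2400 Mbps
uplink_capacity_mbps = uplinks_in_etherchannel * uplink_speed_mbps   # 2000 Mbps

oversubscription_ratio = worst_case_demand_mbps / uplink_capacity_mbps

print(f"Worst-case downstream demand: {worst_case_demand_mbps / 1000:.1f} Gbps")
print(f"Uplink (EtherChannel) capacity: {uplink_capacity_mbps / 1000:.1f} Gbps")
print(f"Oversubscription ratio: {oversubscription_ratio:.2f}:1")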

It is seldom the case, though, that all 24 stations are pulling data at the full 100 Mbps rate for an extended period of time. One of the key ideas behind frame and packet switching networks is that data flows are inherently bursty: periods of data transmission are intermixed with periods of silence. This allows the network to be oversubscribed to a certain degree with no significant ill effects, because it is simply improbable that all devices will request the full bandwidth at the same time. Of course this depends on the actual nature of the stations, protocols, applications and services in use, but it has proven to hold quite well in data networks so far.
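
To illustrate the statistical argument, here is a toy simulation; the 20% activity probability is purely an assumed figure for illustration, not a measured one, and real traffic patterns differ from network to network:

# Toy model of "bursty" station behaviour: each of the 24 stations is
# independently transferring at full rate with some probability in any
# given sampling interval. The point is that concurrent demand rarely
# approaches ports * port_speed.
import random

random.seed(1)

ports = 24
port_speed_mbps = 100
activity_probability = 0.20      # assumed, for illustration only
samples = 10_000

demands = []
for _ in range(samples):
    active = sum(random.random() < activity_probability for _ in range(ports))
    demands.append(active * port_speed_mbps)

average = sum(demands) / samples
peak = max(demands)

print(f"Average demand: {average:.0f} Mbps of {ports * port_speed_mbps} Mbps possible")
print(f"Peak demand seen: {peak} Mbps")
print(f"Samples exceeding a 2000 Mbps uplink: {sum(d > 2000 for d in demands)}")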

So - yes, the CCNP book you are reading is indeed talking about oversubscribing the link between the access and distribution layer. However, for common deployments, this oversubscription is typically not an issue.

Best regards,
Peter

Chances are good that the load on even the main core links of a large campus network does not exceed 1 Gbit/s. But don't tell anyone - it would impact sales and our contracts. :)

Jason, don't confuse yourself with Data Centre switching. The CCNP book you are reading discusses oversubscription as a "worst case scenario", and Data Centre switching is the worst case scenario. You haven't even scratched the surface of the different buffer memory in each switch and how it affects data traffic as a whole. :)

@Leo Laohoo

Yeah, maybe I am getting ahead of myself. :) But I can't help it; my mind just starts spinning, thinking of different ideas and concepts.

 

Thanks for mentioning Data Center Switching... but as you said, I have only just started learning.

@Peter Paluch

Thank you for such a detailed answer. I will take into account what you provided and see if I can make better sense of it.

 

Thanks again

 

Joseph W. Doherty
Hall of Fame

Disclaimer

 

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

 

Liability Disclaimer

 

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

 

Posting

 

As noted by Peter, edge hosts don't generally all concurrently push/pull their full port bandwidth for sustained periods. However, host bandwidth usage often varies a great deal by "kind" of host. For example, many server hosts are "busier" than most user hosts, so when designing networks you normally design for lower oversubscription ratios for server hosts than for user hosts. Old rule-of-thumb ratios suggest oversubscription of about 8:1 to 4:1 for servers, and about 48:1 to 24:1 for users.
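
As an illustration of how such rule-of-thumb ratios translate into uplink sizing, here's a small Python sketch; the port counts and speeds are hypothetical examples, not figures from this thread:

def required_uplink_mbps(ports: int, port_speed_mbps: int, oversub_ratio: float) -> float:
    """Uplink bandwidth needed so aggregate edge capacity / uplink capacity = ratio."""
    return ports * port_speed_mbps / oversub_ratio

# 48 user ports at 1 Gbps with a 24:1 oversubscription target
print(required_uplink_mbps(48, 1000, 24))   # 2000.0 Mbps -> a 2 Gbps uplink

# 24 server ports at 1 Gbps with a 4:1 oversubscription target
print(required_uplink_mbps(24, 1000, 4))    # 6000.0 Mbps -> e.g. multiple bundled uplinks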

 

Keep in mind that oversubscription ratios can be "skewed" by what the host is doing, i.e. not all server or user hosts have similar bandwidth demands. For example, your primary mail server or primary file server might be much "busier" than other server hosts. Likewise, some user hosts might be much "busier"; for example, years ago I supported a LAN segment of 20 CADD workstations which had more traffic on their local LAN than the 2,000-user corporate backbone.
