Campus Network Design Guideline
Building a campus network is more than just interconnecting physical network infrastructure devices. The most challenging and important parts are the planning and design phases, where different technical variables and technologies need to be considered; these can affect the product selection and even the design as a whole. A good design is also the key to a network's ability to scale. This guideline discusses some of the technologies and design considerations that need to be taken into account during the planning and design phases to produce a scalable campus network.
Although this guideline is based on Cisco's recommendations and best practices, it is not an official Cisco document. For more details, it is recommended to refer to the Cisco design guides referenced in each section of this guideline.
Campus Network Overview
A campus network is generally the portion of the network infrastructure that provides access to network communication services and resources to end users and devices spread across a single geographic location. It might be a single floor, a building, or even a group of buildings spread over an extended geographic area.
Common Campus network Hierarchical Design Models
Cisco's hierarchical network design model breaks the complex problem of network design into smaller, more manageable pieces. Each level, or tier, in the hierarchy is focused on a specific set of roles. This helps the network designer and architect optimize and select the right network hardware, software, and features to perform the specific roles of that network layer.
A typical enterprise hierarchical campus network design includes the following three layers:
The Core layer that provides optimal transport between sites and high performance routing
The Distribution layer that provides policy-based connectivity and control boundary between the access and core layers
The Access layer that provides workgroup/user access to the network
The two proven hierarchical design architectures for campus networks are the three-tier and two-tier models.
Three-tier layer model
This design model can be used in large campus networks where multiple distribution blocks and buildings need to be interconnected.
Two-tier layer model
This model can be used in small and medium campus networks, where the core and distribution functions can be collapsed into one layer; this is also known as the collapsed core/distribution model.
Modular Campus Network Architecture
Applying the hierarchical design model discussed above to multiple blocks within the campus network results in a more scalable and modular topology of "building blocks" that allows the network to meet evolving business needs. The modular design makes the network more scalable and manageable by promoting deterministic traffic patterns. Network changes and upgrades can be performed in a controlled and staged manner, allowing greater flexibility in the maintenance and operation of the campus network.
As shown in the figure above, a typical large Cisco modular campus network consists of the following building blocks:
Core Block (required for large Networks only)
It provides a very limited set of services and is designed to be highly available and operate in an always-on mode. A separate core provides the ability to scale the campus network in a structured fashion that minimizes overall complexity as the size of the network and the number of interconnections required to tie the campus together grow.
The access-distribution block consists of two of the three hierarchical tiers within the multi-layer campus architecture: the access and distribution layers. There are currently three basic design models for the access-distribution block:
Multi-tier (Layer 2 access)
Routed access
Virtual switch (recommended solution)
The main difference between the above models is where the Layer-2 and Layer-3 boundaries exist
For more details please refer to the following link:
The services block is a relatively new element in campus design. As campus network planners begin to consider migration to dual-stack IPv4/IPv6 environments, migrate to controller-based WLAN environments, and continue to integrate more sophisticated Unified Communications services, a number of real challenges lie ahead. It will be essential to integrate these services into the campus smoothly, while providing the appropriate degree of operational change management and fault isolation and continuing to maintain a flexible and scalable design. As an example, IPv6 services can be deployed via an interim ISATAP overlay that allows IPv6 devices to tunnel over portions of the campus that are not yet natively IPv6 enabled. Such an interim approach allows for a faster introduction of new services without requiring a network-wide, hot cutover.
Examples of functions recommended to be located in a services block include:
Centralized LWAPP wireless controllers
IPv6 ISATAP tunnel termination
Local Internet edge
Unified Communications services (Cisco Unified Communications Manager, gateways, MTP, and the like)
There might be multiple services blocks depending on the scale of the network, the level of geographic redundancy required, and other operational and physical factors
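As a sketch, the ISATAP tunnel termination mentioned above might look like the following IOS fragment; the IPv6 prefix, interface numbers, and source loopback are illustrative assumptions, not values from any specific design guide:

```
! Illustrative ISATAP tunnel termination on a services block switch
! (prefix, tunnel number and loopback are assumptions)
interface Tunnel2
 ! advertise this IPv6 prefix to ISATAP hosts
 ipv6 address 2001:db8:100::/64 eui-64
 ! re-enable router advertisements (suppressed by default on tunnel
 ! interfaces; exact command syntax varies by IOS release)
 no ipv6 nd ra suppress
 tunnel source Loopback2
 tunnel mode ipv6ip isatap
```

Dual-homing the services block and anycasting the tunnel source address is a common way to make this termination point redundant.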
As described in Cisco’s Enterprise Campus 3.0 Architecture
The data center block of a campus network, also known as the "server farm", can be considered another block of the campus LAN that uses the same hierarchical design model. However, the data center has factors and design requirements that differ from a typical access-distribution switch design, such as port capacity and near-0% oversubscription, and more specialised services can be introduced, such as firewalling and load balancing. For small and medium data centers, the collapsed (two-tier) design model can be used without the need for a dedicated data center core.
Cisco's next-generation data center switches, the Nexus series, can significantly improve the performance, reliability, and redundancy of the data center by providing:
High performance switching and software/hardware redundancy
Non-blocking end-to-end topology with vPC technology
Support for smart data center interconnect (DCI) technologies such as OTV, which provide the ability to extend a Layer 2 network over a Layer 3 link/cloud
Ability to provide an end-to-end unified fabric carrying both IP and Fibre Channel over Ethernet (FCoE) traffic
Fabric Extender Technology: a set of technologies that enable fabric extensibility with simplified management, allowing the switching access layer to extend all the way to the server hypervisor as the customer's business grows
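As an illustration of the non-blocking vPC topology mentioned above, a minimal NX-OS sketch for one Nexus peer might look like the following; the domain ID, keepalive addresses, and port-channel numbers are assumptions for illustration only:

```
! Illustrative NX-OS vPC configuration on one Nexus peer
feature lacp
feature vpc
vpc domain 10
  ! peer-keepalive runs over a separate link; addresses are assumptions
  peer-keepalive destination 192.0.2.2 source 192.0.2.1
! peer-link trunk carrying all VLANs between the two Nexus peers
interface port-channel1
  switchport mode trunk
  vpc peer-link
! downstream port-channel to an access switch, dual-homed to both peers
interface port-channel20
  switchport mode trunk
  vpc 20
```

Because the access switch sees both Nexus peers as one logical port-channel endpoint, no uplink is blocked by spanning tree.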
In a typical hierarchical campus network, the distribution layer/block is considered the demarcation point between the Layer 2 and Layer 3 domains, where the Layer 3 uplinks participate in the campus core routing using an interior gateway protocol (IGP), which can interconnect multiple campus distribution blocks for end-to-end campus connectivity. As a result, the selection of the IGP is important for redundant and reliable IP/routing reachability within the campus, taking into consideration scalability and the ability of the network to grow with minimal changes/impact to the network and routing design. Some of the factors to consider when selecting an IGP for a campus LAN:
Size of the network, e.g. number of L3 hops and expected future growth
Convergence time e.g. OSPF and EIGRP can converge during a link/path failure quicker than RIP
Support for variable length subnet mask (VLSM)
Support of route summarization
For more details refer to the following link, Cisco Borderless Campus design guide, routing design principles:
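As a sketch, enabling an IGP such as OSPF on a distribution switch's Layer 3 uplinks might look like the following; the process ID, areas, interface numbers, and addressing are assumptions for illustration:

```
! Illustrative IOS OSPF configuration on a distribution switch
router ospf 1
 router-id 10.0.0.1
 ! keep access-facing interfaces passive; run OSPF only on core uplinks
 passive-interface default
 no passive-interface TenGigabitEthernet1/1
 no passive-interface TenGigabitEthernet1/2
 network 10.0.0.0 0.0.255.255 area 0
 network 10.1.0.0 0.0.255.255 area 1
 ! summarize this block's prefixes toward the core at the area border
 area 1 range 10.1.0.0 255.255.0.0
```

Summarizing each distribution block at the area boundary supports the route-summarization factor listed above and contains the impact of topology changes.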
Network devices/hosts connected to the access layer switches need a default IP gateway, which should be made redundant using a First-Hop Redundancy Protocol (FHRP). In a hierarchical campus network, if a virtual switch mechanism such as Cisco VSS is not used at the distribution layer, then the distribution layer switches need to provide the FHRP service, e.g. HSRP.
For more details around FHRP refer to the following link:
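For example, a basic HSRP configuration on the first of two distribution switches might look like this; the VLAN, addresses, group number, and priority are assumptions for illustration:

```
! Illustrative IOS HSRP configuration - distribution switch 1 (intended active)
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 ! virtual gateway address shared with the peer distribution switch
 standby 10 ip 10.1.10.1
 standby 10 priority 110
 standby 10 preempt
!
! distribution switch 2 (standby) would use 10.1.10.3 with the same
! "standby 10 ip 10.1.10.1" and a lower (default) priority
```

Hosts in VLAN 10 point at the virtual address 10.1.10.1, so a failure of either distribution switch is transparent to them.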
In a modern campus network, there is an increasing demand to separate multiple logical groups, such as users, services, and applications, within the campus network for security and other business requirements. Network virtualization is the most suitable solution for this type of requirement, as multiple logically isolated networks can be created over one common physical network.
Cisco network virtualization divides the network into three main logical areas:
Access (edge) control, which identifies clients and assigns them to the correct logical partition
Path isolation, which keeps the traffic of each logical partition separate across the network, e.g. using VRFs with 802.1Q, GRE, or MPLS
Services edge, which provides shared or dedicated services to the logical partitions in a controlled manner
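A common building block for path isolation in such designs is VRF-Lite; a minimal IOS sketch might look like the following, where the VRF name, route distinguisher, VLAN, and addressing are assumptions for illustration:

```
! Illustrative VRF-Lite configuration isolating guest traffic
ip vrf GUEST
 rd 65000:10
! 802.1Q subinterface on an uplink, placed in the GUEST VRF
interface TenGigabitEthernet1/1.10
 encapsulation dot1Q 10
 ip vrf forwarding GUEST
 ip address 10.10.0.1 255.255.255.0
```

Each logical partition gets its own routing table, so guest traffic cannot reach prefixes in the global table unless explicitly leaked at the services edge.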
The need for a highly available network is not a new requirement; however, with the increased number of services and communications that utilise the underlying IP network infrastructure, availability becomes crucial and is one of the main elements of the campus network that needs to be considered during the planning and design phases. The following three major network resiliency requirements, as described in the Cisco Borderless design guide 1.0, cover most of the common types of failure conditions. Depending on the LAN design tier, the resiliency option appropriate to the role and network service type must be deployed:
Network resiliency: Provides redundancy during physical link failures, such as fiber cut, bad transceivers, incorrect cabling, and so on.
Device resiliency: Protects the network during abnormal node failure triggered by hardware or software, such as software crashes, a non-responsive supervisor, and so on.
Operational resiliency: Enables resiliency capabilities to the next level, providing complete network availability even during planned network outages using In Service Software Upgrade (ISSU) features.
Although redundant components within a single device are valuable, the best availability is achieved with completely separate devices and paths.
According to Cisco Medianet QoS campus design, the primary role of QoS in medianet campus networks is not to control latency or jitter (as it is in the WAN/VPN), but to manage packet loss. In GE/10GE campus networks, it takes only a few milliseconds of congestion to cause instantaneous buffer overruns resulting in packet drops. Medianet applications—particularly HD video applications—are extremely sensitive to packet drops, to the point where even 1 packet dropped in 10,000 is discernible by the end-user.
Classification, marking, policing, queuing, and congestion avoidance are therefore critical QoS functions that are optimally performed within the medianet campus network.
Four strategic QoS design principles that apply to campus QoS deployments include:
Always perform QoS in hardware rather than software when a choice exists.
Classify and mark applications as close to their sources as technically and administratively feasible.
Police unwanted traffic flows as close to their sources as possible.
Enable queuing policies at every node where the potential for congestion exists.
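The classify-and-mark and policing principles above can be sketched with an MQC policy applied inbound on an access port; the ACL, class names, DSCP values, and police rate below are assumptions for illustration:

```
! Illustrative IOS MQC policy at the access edge
ip access-list extended VOICE-RTP
 permit udp any any range 16384 32767
class-map match-all VOICE
 match access-group name VOICE-RTP
policy-map ACCESS-EDGE-IN
 class VOICE
  ! mark voice EF and police it close to the source
  set dscp ef
  police 128000 8000 exceed-action drop
 class class-default
  set dscp default
interface GigabitEthernet1/0/1
 service-policy input ACCESS-EDGE-IN
```

Because classification and policing happen in hardware at the port where traffic enters, unwanted or out-of-profile flows are dropped before they can consume uplink buffers deeper in the campus.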