The following content is taken from the Cisco Press book NX-OS and Cisco Nexus Switching: Next-Generation Data Center Architectures, by Kevin Corbin, Ron Fuller, and David Jansen.

Cisco built NX-OS as a next-generation, data center-class operating system designed for maximum scalability and application availability, with modularity, resiliency, and serviceability at its foundation. NX-OS is based on the industry-proven Cisco Storage Area Network Operating System (SAN-OS) Software and helps ensure the continuous availability expected of mission-critical data center environments. The self-healing, highly modular design of Cisco NX-OS enables operational excellence, increasing service levels and providing exceptional operational flexibility. Advantages of Cisco NX-OS include the following:

  • Unified data center operating system
  • Robust and rich feature set with a variety of Cisco innovations
  • Flexibility and scalability
  • Modularity
  • Virtualization
  • Resiliency
  • IPv4 and IPv6 IP routing and multicast features
  • Comprehensive security, availability, serviceability, and management features

One key benefit of NX-OS is the use of virtual device contexts (VDCs). Cisco Nexus 7000 Series switches can be segmented into virtual devices based on customer requirements. VDCs offer several benefits, such as fault isolation, administration-plane separation, separation of data traffic, and enhanced security. This logical separation provides the following benefits:

  • Administrative and management separation
  • Change and failure domain isolation from other VDCs
  • Address, VLAN, VRF, and vPC isolation

Each VDC appears as a unique device and allows for separate Role-Based Access Control (RBAC) per VDC. This enables VDCs to be administered by different administrators while still maintaining rich, granular RBAC capability. With this functionality, each administrator can define virtual routing and forwarding instance (VRF) names and VLAN IDs independent of those used in other VDCs, safe in the knowledge that each VDC maintains its own unique software processes, configuration, and data-plane forwarding tables.
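
As a rough sketch of per-VDC administration (the VDC name core, the username coreadmin, and the password placeholder are hypothetical, and the built-in role names such as vdc-admin can vary by NX-OS release), an administrator working in the default VDC could switch into a nondefault VDC and create a local administrator that exists only there:

egypt# switchto vdc core
egypt-core# configure terminal
egypt-core(config)# username coreadmin password <password> role vdc-admin
egypt-core(config)# end
egypt-core# switchback

The coreadmin account can then manage only that VDC, with no visibility into, or rights over, the default VDC or any other VDC.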

Each VDC also maintains an individual high-availability (HA) policy that defines the action the system takes when a failure occurs within a VDC. Depending on the hardware configuration of the system, various actions can be performed. In a single-supervisor system, the VDC can be shut down, restarted, or the supervisor can be reloaded. In a redundant supervisor configuration, the VDC can be shut down, restarted, or a supervisor switchover can be initiated.

Some components are shared between VDCs, including the following:

  • A single instance of the kernel, which supports all of the processes and VDCs
  • Supervisor modules
  • Fabric modules
  • Power supplies
  • Fan trays
  • System fan trays
  • CMP
  • CoPP
  • Hardware SPAN resources

This figure shows the logical segmentation with VDCs on the Nexus 7000. A common use case is horizontal consolidation to reduce the quantity of physical switches at the data center aggregation layer. In this figure, there are two physical Nexus 7000 chassis; the logical VDC layout is also shown.

[Figure: Logical segmentation with VDCs across two physical Nexus 7000 chassis]

VDC Configuration Examples

This section shows the required steps to create a VDC; after the VDC is created, you assign resources to it. VDCs are always created from the default admin VDC context, VDC context 1.

Note: The maximum number of VDCs that can be configured per Nexus 7000 chassis is four: the default VDC (VDC 1) plus three additional VDCs.

This example shows how to create the VDC named core on the switch egypt:

egypt(config)# vdc core
Note: Creating VDC, one moment please ...

egypt# show vdc

vdc_id    vdc_name     state     mac
1         egypt        active    00:1b:54:c2:38:c1
2         core         active    00:1b:54:c2:38:c2

egypt# show vdc core detail

vdc id: 2
vdc name: core
vdc state: active
vdc mac address: 00:1b:54:c2:38:c2
vdc ha policy: RESTART
vdc dual-sup ha policy: SWITCHOVER
vdc boot Order: 2
vdc create time: Mon Feb 22 13:11:59 2010
vdc reload count: 1
vdc restart count: 0
egypt#

Once the VDC is created, you must assign physical interfaces to it. Depending on the Ethernet modules installed in the switch, interface allocation is supported as follows:

  • On the 32-port 10-Gigabit Ethernet Module (N7K-M132XP-12), interfaces can be allocated on a per-port-group basis; there are eight port groups. For example, port group 1 consists of interfaces e1, e3, e5, and e7; port group 2 consists of interfaces e2, e4, e6, and e8 (an allocation sketch for a full port group appears below).
  • The 48-port 10/100/1000 I/O Module (N7K-M148GT-11) can be allocated on a per-port basis.
  • The 48-port 1000BaseX I/O Module (N7K-M148GS-11) can be allocated on a per-port basis.

On a future module, the N7K-D132XP-15, interfaces will be allocated in groups of two ports per VDC.
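
As a hedged sketch of per-port-group allocation on the N7K-M132XP-12 (the VDC name core and the interfaces chosen are placeholders; depending on the NX-OS release, allocating any member of a port group may prompt for confirmation and move the entire group, and if a comma-separated interface list is not accepted, each interface can be allocated with its own allocate interface command):

egypt(config)# vdc core
egypt(config-vdc)# allocate interface Ethernet2/1, Ethernet2/3, Ethernet2/5, Ethernet2/7

Allocating all four members of the port group in one step keeps the hardware grouping and the VDC membership aligned.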

Note: It is not possible to virtualize a physical interface and associate the resulting logical interfaces with different VDCs. A supported configuration is to virtualize a physical interface and associate the resulting logical interfaces with different VRFs or VLANs. By default, all physical ports belong to the default VDC.
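
To make the supported case concrete, here is a minimal sketch within the default VDC (the interface Ethernet1/26, the VRF name RED, VLAN 100, and the IP address are placeholders) that virtualizes a routed port into a dot1q subinterface and places that subinterface into a VRF:

egypt(config)# interface Ethernet1/26
egypt(config-if)# no switchport
egypt(config-if)# exit
egypt(config)# vrf context RED
egypt(config-vrf)# exit
egypt(config)# interface Ethernet1/26.100
egypt(config-subif)# encapsulation dot1q 100
egypt(config-subif)# vrf member RED
egypt(config-subif)# ip address 10.1.100.1/24

The physical port Ethernet1/26 itself still belongs to exactly one VDC; only its logical subinterfaces and VRF membership are divided.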

This example demonstrates how to allocate interfaces to a VDC:

egypt(config)# vdc core
egypt(config-vdc)# allocate interface Ethernet1/17
egypt(config-vdc)# allocate interface Ethernet1/18

To verify the interface allocation, enter the show vdc membership command, as demonstrated in this example:

egypt(config-vdc)# show vdc membership

vdc_id: 1 vdc_name: egypt interfaces:
       Ethernet1/26      Ethernet1/28      Ethernet1/30
       Ethernet1/32      Ethernet2/2       Ethernet2/4
       Ethernet2/6       Ethernet2/8       Ethernet2/26
       Ethernet2/28      Ethernet2/30      Ethernet2/32
       Ethernet3/4       Ethernet3/5       Ethernet3/6
       Ethernet3/7       Ethernet3/8       Ethernet3/9
       Ethernet3/11      Ethernet3/12      Ethernet3/13
       Ethernet3/14      Ethernet3/15      Ethernet3/16
       Ethernet3/17      Ethernet3/18      Ethernet3/19
       Ethernet3/20      Ethernet3/21      Ethernet3/22
       Ethernet3/23      Ethernet3/24      Ethernet3/25
       Ethernet3/26      Ethernet3/27      Ethernet3/28
       Ethernet3/29      Ethernet3/30      Ethernet3/31
       Ethernet3/32      Ethernet3/33      Ethernet3/34
       Ethernet3/35      Ethernet3/36      Ethernet3/39
       Ethernet3/40      Ethernet3/41      Ethernet3/42
       Ethernet3/43      Ethernet3/44      Ethernet3/45
       Ethernet3/46      Ethernet3/47      Ethernet3/48

vdc_id: 2 vdc_name: core interfaces:


       Ethernet1/17      Ethernet1/18      Ethernet1/19
       Ethernet1/20      Ethernet1/21      Ethernet1/22
       Ethernet1/23      Ethernet1/24      Ethernet1/25
       Ethernet1/27      Ethernet1/29      Ethernet1/31
       Ethernet2/17      Ethernet2/18      Ethernet2/19
       Ethernet2/20      Ethernet2/21      Ethernet2/22
       Ethernet2/23      Ethernet2/24      Ethernet2/25
       Ethernet2/27      Ethernet2/29      Ethernet2/31
       Ethernet3/1       Ethernet3/2       Ethernet3/3
       Ethernet3/10
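
Once interfaces are allocated, their Layer 2/Layer 3 configuration is performed inside the new VDC itself. This is a minimal sketch, assuming the core VDC and the Ethernet1/17 interface allocated earlier (the first switchto into a new VDC typically runs its initial setup dialog; the IP address here is a placeholder):

egypt# switchto vdc core
egypt-core# configure terminal
egypt-core(config)# interface Ethernet1/17
egypt-core(config-if)# no switchport
egypt-core(config-if)# ip address 192.0.2.1/30
egypt-core(config-if)# no shutdown
egypt-core(config-if)# end
egypt-core# switchback

Note that switchback returns to the default VDC; configuration entered while switched into core is stored in that VDC's own running configuration.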

In addition to interfaces, other resources can be allocated to an individual VDC, including IPv4 route memory, IPv6 route memory, port channels, SPAN sessions, VLANs, and VRFs. Configuring these limits prevents a single VDC from monopolizing system resources. This example demonstrates how to accomplish this:

egypt(config)# vdc core
egypt(config-vdc)# limit-resource port-channel minimum 32 maximum equal-to-min
egypt(config-vdc)# limit-resource u4route-mem minimum 32 maximum equal-to-min
egypt(config-vdc)# limit-resource u6route-mem minimum 32 maximum equal-to-min
egypt(config-vdc)# limit-resource vlan minimum 32 maximum equal-to-min
egypt(config-vdc)# limit-resource vrf minimum 32 maximum equal-to-min
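
To review how these limits compare with what each VDC is actually consuming, the allocations can be checked from the default VDC. This is a hedged sketch (output omitted; the exact set of resources and columns varies by NX-OS release, and per-resource filters such as port-channel may not be available in every release):

egypt# show vdc resource
egypt# show vdc resource port-channel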

Defining the VDC HA policy is also done within the VDC configuration submode. Use the ha-policy command to define the HA policy for a VDC, as demonstrated in this example:

egypt(config)# vdc core
egypt(config-vdc)# ha-policy dual-sup bringdown

The appropriate HA policy depends on the use case or VDC role. For example, with dual supervisor modules in the Nexus 7000 chassis, a development/test VDC might use a policy that simply shuts down the VDC, whereas a VDC serving the core or aggregation layer would use a switchover policy.
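
As a hedged sketch of that mapping (the VDC name dev is hypothetical, core is the VDC created earlier, and the exact keywords accepted by ha-policy can vary by NX-OS release), the two roles could be configured as follows:

egypt(config)# vdc dev
egypt(config-vdc)# ha-policy dual-sup bringdown
egypt(config-vdc)# exit
egypt(config)# vdc core
egypt(config-vdc)# ha-policy single-sup restart
egypt(config-vdc)# ha-policy dual-sup switchover

The show vdc core detail output used earlier reports the single-supervisor and dual-supervisor policies currently applied, which makes it a convenient way to confirm the change.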
