Marwan ALshawi
VIP Alumni

Overview

Server virtualization (also known as hardware virtualization) is the most common form of virtualization. Hypervisor software creates virtual machines (VMs) that emulate physical computers, each running its own operating system environment logically separated from the hosting system. A single physical system can host several VMs that run different operating systems independently and at the same time.

Cisco's data center compute platform, the Unified Computing System (UCS), provides highly scalable, simplified and manageable compute functionality, along with industry-leading memory capacity through Cisco Extended Memory Technology.

Although Cisco UCS is scalable and manageable and simplifies operations rather than making them harder (adding or moving servers and installing new VMs can be done within minutes), the actual design of a Cisco UCS system still requires knowledge and expertise in several areas, such as server virtualization with VMware, storage (SAN), and Layer 2 and Layer 3 networking. This article focuses on the networking part from a high-level point of view. It describes methods and mechanisms for designing the connectivity between the virtualized network and the physical network, and compares different approaches and methodologies using Cisco UCS and the Nexus 1000V switch. There is no deep dive into any of the discussed areas; instead, a link is provided for each concept that can be referred to for more detail and technical understanding. The goal is to point the designer in the right direction before going deep into the details.

Note:

  • This article is not a best-practice design document or an official Cisco guide
  • Assumptions: the reader has a basic understanding of VMware VI (such as ESXi and vCenter), Cisco Layer 2 switching, and basic routing and QoS

Technical Background of Virtual Networking

Designing and building a virtual switched LAN within a virtualized environment such as VMware has some similarities to physical switching, but there are technical differences in the operations and the terminology used:

  • vSwitch: a virtual switch is a software switch that switches VM traffic and management traffic within the same ESXi host
  • Port group: a logical object that provides different functions within the vSwitch, such as a VM port group or a VMkernel port group (used for vMotion, for example)
  • VM port group: a group of virtual ports that share a common configuration, such as the VLAN ID
  • NIC teaming: the aggregation of two or more physical NICs into one logical NIC (link aggregation)
  • VMkernel port: a specialized vSwitch port that enables vMotion, iSCSI, NFS, NAS or FT traffic

Because each ESXi host has its own independent vSwitch, VMs connected to the same vSwitch on the same ESXi host do not need a physical switch to forward traffic between them. However, direct communication between two vSwitches is not supported, so a physical switch is required to forward traffic between VMs on different vSwitches, even when they reside on the same ESXi host. A physical NIC must also be defined as an uplink on each vSwitch; it is used to send traffic out to the physical switch, whether to reach other vSwitches or for inter-VLAN routing, for example.

basicVMNICs.jpg
basicVMNICs2.jpg
n1k01.jpg

Now the question that arises is: what are the technical differences between a vSwitch and a physical switch?

  • A vSwitch does not support DTP
  • A vSwitch cannot connect or communicate with another vSwitch under normal circumstances; this eliminates Layer 2 loops, which is why a vSwitch does not run Spanning Tree Protocol (STP)
  • Traffic received by a vSwitch on one uplink is never forwarded out of another uplink on the same vSwitch, which is another reason STP is not required
  • A vSwitch already knows the MAC addresses of its connected VMs, so it does not need to learn them from the network

Distributed Virtual Switches

VMWare vDS

With the VMware virtual distributed switch (vDS), the switching fabric spans multiple VMware ESXi hosts. This enables VMs to migrate smoothly from one VMware vSphere server to another, and the vDS provides centralized management through its integration with VMware vCenter.

vDS.jpg

Cisco Nexus 1000v

Cisco developed the first third-party vDS that works with the VMware virtual infrastructure. The Cisco Nexus 1000V series switch gives network administrators, to a large extent, the same features, management and functionality they are used to with physical switches, but the switch is now delivered as software rather than hardware.

Note:

There is also the newer Cisco Nexus 1010, which is delivered as a physical appliance, but the vSwitch concept remains the same software; the appliance hardware simply offloads the control-plane (VSM) processing from the VI servers.

n1k02.jpg

The Nexus 1000V (aka N1K) offers several features that, at the time of writing, are available only on the N1KV, such as:

  • IGMPv3
  • Virtual Port Channel (vPC)
  • Load-balancing algorithms based on source/destination MAC and IP
  • QoS: Layer 3 DSCP and Layer 2 CoS
  • Security: ACLs, Dynamic ARP Inspection, IP Source Guard, DHCP Snooping
  • Network management: ERSPAN, NetFlow v9, NX-OS XML API

Nexus 1000v Architecture Overview (High Level)

As mentioned earlier, there will be no deep dive into any specific topic, only a high-level discussion of the concepts. Understanding the architecture of the Nexus 1000V is crucial when designing a virtualized network environment with Cisco UCS and VMware. So what are the components of the Nexus 1000V, and how does the overall architecture of this software look in a virtualized environment?

To answer these questions, we first need to understand the components of the N1K:

VSM: Virtual Supervisor Module

Just like the supervisor module of a modular switch such as the Cisco 6500, the VSM is the brain of the switch: it handles the control plane, the management functions, and the integration with VMware vCenter.

VEM: Virtual Ethernet Module

This is analogous to an Ethernet line card in a physical switch, with the difference that each ESXi host runs a VEM that communicates with, and is controlled by, the VSM. The VEM replaces the ESXi vSwitch and provides each VM with a dedicated switch port (a minimal port-profile sketch is shown after the figure below).

n1k04.jpg
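To make the VEM's role concrete, the following is a minimal sketch (not a validated configuration) of a vethernet port-profile defined on the VSM; this is the N1K construct that takes the place of a vSwitch VM port group. The profile name and VLAN ID are assumed example values.

  ! Example only: the profile name and VLAN 100 are assumed values
  port-profile type vethernet VM-DATA-100
    vmware port-group
    switchport mode access
    switchport access vlan 100
    no shutdown
    state enabled

Once the profile is enabled and the VSM is connected to vCenter, it appears in vCenter as a port group that can be assigned to VM vNICs on any host running a VEM.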

In a physical switch there is normally a backplane crossbar used for communication between the modules. This virtualized switch has no backplane crossbar; instead it uses AIPC, the same inter-module protocol used in the Nexus 7000 and MDS switches, for communication between the VEMs and the VSM. This communication can be carried either at:

  • L2: using dedicated VLANs for control and packet traffic
  • L3: using the Layer 3 control capability, where each VEM uses its VMkernel interface to tunnel control traffic to the VSM

The most commonly used approach is L2 connectivity, which requires three types of VLANs to be created: management, control and packet VLANs (a minimal configuration sketch follows the figure below):

n1kl2.jpg
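For illustration only, the following is a minimal sketch of how L2 control mode and its VLANs might be declared on the VSM. The domain ID and VLAN numbers are arbitrary assumed values, not recommendations, and the same control and packet VLANs must also exist on the uplinks and upstream switches.

  ! Example only: domain 100, control VLAN 260 and packet VLAN 261 are assumed values
  svs-domain
    domain id 100
    control vlan 260
    packet vlan 261
    svs mode L2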

Network Connectivity Design Methods with Cisco UCS

Earlier in this document, several concepts and methods of virtualized switching and networking were described, with a focus on the Cisco Nexus 1000V vDS. Before discussing how this virtual network connects to the physical data center network, we need to understand, at a high level, what the Cisco Unified Computing System (UCS) offers to this virtualized environment from a networking point of view, as this document focuses on the network part only.

Cisco UCS offers many features, functions and capabilities that boost virtualized infrastructure deployments and make them more scalable and manageable, from the physical layer (power and cabling with unified I/O) up to the application layer with UCSM and the "Layer 8" user experience.

For more information about Cisco UCS, please refer to the link below:

www.cisco.com/go/ucs

At a high level, the Cisco UCS components are as shown below:

ucs_components.jpg

For more details, see the link below:

https://supportforums.cisco.com/docs/DOC-5945

The figure below shows the typical physical connectivity of the Cisco UCS platform:

ucs-physical.png

Although this physical connectivity looks standard, the logical vNetworking connectivity will differ depending on the NIC card used in the UCS platform, in conjunction with the Cisco N1K vDS as the virtual switch. Cisco UCS offers different types of NICs, each with different capabilities, which can be selected based on the network design requirements:

  • Cisco virtual interface card (VIC, the M81KR discussed below): optimized for virtualization, with up to 58 dynamic, programmable Ethernet and Fibre Channel interfaces, and hardware- and/or software-enabled Cisco VN-Link capability

How the M81KR virtualized adapter integrates with the rest of the UCS system, and what its benefits are:

https://supportforums.cisco.com/docs/DOC-8953

  • Converged network adapter: a total of 4 fixed interfaces (2 Ethernet and 2 Fibre Channel), with software-enabled Cisco VN-Link capability

  • Ethernet-only adapter: ideal for efficient, high-performance Ethernet, with a total of 2 fixed Ethernet interfaces and software-enabled Cisco VN-Link capability

The figure below shows the logical architecture of the Cisco virtualized interface card (VIC):

vic01.jpg

As mentioned above, using different NICs can lead to different vNetworking approaches. In general there are two main approaches, forwarding/switching in hardware or in software, and the hardware approach must use the Cisco VIC to provide virtual NICs (vNICs) per VM.

The figure below is taken from Cisco Live 2011 (End-to-End Data Center Virtualization) and summarises the whole concept:

vnics.jpg

As shown in the figure above, VN-Link in hardware can provide better performance and simpler management, since management is handled through UCSM and vCenter and the VSM is not part of the solution. This also means that any aggregation or NIC teaming has to be managed in, and kept aligned between, VMware vCenter and UCSM.

VN-Link in software, in contrast, uses the VSM and VEM concepts described earlier in this document, and all of the networking configuration and administration, including pinning and link aggregation, is done through the N1K VSM CLI.

For more details, please see the links below:

Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B and C Series Cisco UCS Manager Servers

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.html

Cisco Nexus 1000V Series Switches Deployment Guide

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/guide_c07-556626.html

Cisco N1K Connectivity Options

As described above, if the N1K is going to be used with a VSM for vNetworking (vNetworking in software), then all of the configuration of the uplinks, link aggregation and pinning has to be done from the VSM CLI. Just like any physical access switch, the design and configuration depend on the aggregation-layer switches and how they have been designed; whether they are aggregated/clustered using a virtualization technology such as vPC or VSS will affect how the N1K uplinks are designed.

There are different ways in which the N1K VEM can connect to the upstream switches:

  • vPC Host Mode (MAC pinning)

With MAC pinning, the uplinks are organised into sub-groups and VMs are pinned automatically to a sub-group when they boot. If a link fails, re-pinning and failover to the remaining working links happen automatically. This approach is useful when the upstream switches are not clustered using vPC or VSS, for example (a configuration sketch follows the config guide link below).

Config guide:

http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4/interface/configuration/guide/n1000v_if_5portchannel.html#wp1302690
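As a rough illustration of the approach (not a validated design), a MAC-pinning uplink port-profile on the VSM could look like the sketch below; the profile name, VLAN range and system VLANs are assumed example values.

  ! Example only: profile name, VLAN range and system VLANs are assumed values
  port-profile type ethernet UPLINK-MAC-PIN
    vmware port-group
    switchport mode trunk
    switchport trunk allowed vlan 100-110,260-261
    channel-group auto mode on mac-pinning
    no shutdown
    system vlan 260,261
    state enabled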

  • Static pinning

Static pinning provides the ability to steer specific VMs or traffic types onto specific uplinks; for example, vMotion traffic can be forced onto a particular uplink. This design still preserves active/standby behaviour during failure situations (a configuration sketch follows the config guide link below).

Config guide:

http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4/interface/configuration/guide/n1000v_if_5portchannel.html#wp1302673
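The sketch below is only an illustration of the idea (names, VLANs and the pinning ID are assumed example values): the Ethernet uplinks are placed into sub-groups, and the vethernet port-profile used for the vMotion VMkernel port is pinned to one of them.

  ! Example only: names, VLAN and pinning ID are assumed values
  port-profile type ethernet UPLINK-STATIC
    vmware port-group
    switchport mode trunk
    channel-group auto mode on sub-group cdp
    no shutdown
    state enabled

  port-profile type vethernet VMOTION-VMK
    vmware port-group
    switchport mode access
    switchport access vlan 105
    pinning id 0
    no shutdown
    state enabled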

  • Port Channel (LACP)

LACP is the control mechanism used for port channelling (link aggregation). It provides the ability to bundle and utilise more than one uplink for greater throughput, higher availability and better failover/convergence times. This method is recommended when the upstream switches are aggregated/clustered using vPC or VSS (a configuration sketch follows the figure below).

lacp.jpg
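A rough sketch of an LACP uplink port-profile is shown below, assuming the upstream switches are clustered (vPC or VSS) and run LACP on their side as well; the profile name and VLAN range are assumed example values.

  ! Example only: the upstream vPC/VSS pair must also be configured for LACP
  feature lacp
  port-profile type ethernet UPLINK-LACP
    vmware port-group
    switchport mode trunk
    switchport trunk allowed vlan 100-110
    channel-group auto mode active
    no shutdown
    state enabled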

Note:

LACP is a control-plane protocol that runs on the supervisor of a switch (the VSM in the case of the N1K).

If the VSM is disconnected or down, the VEM operates in "headless" mode and cannot use the control-plane LACP function.

LACP offload solves this problem by offloading the operation of the LACP protocol from the VSM to the VEMs (see the sketch below).
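If the software release in use supports it, enabling the offload is a small configuration step on the VSM; the sketch below is an assumption-based illustration, and the exact procedure (including any required VSM reload) should be verified against the release notes linked below.

  ! Assumption: supported from Release 4.2(1)SV1(4); verify the procedure in the release notes
  lacp offload
  copy running-config startup-config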

Cisco Nexus1000V Release Notes, Release 4.2(1) SV1(4):

http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4/release/notes/n1000v_rn.html#wp72803

Cisco UCS and Quality of Service

Modern converged networks and data centers carry delay-sensitive traffic such as voice and video alongside business-critical applications that need bandwidth guarantees, so QoS is a very important component of any network design. In addition, Cisco UCS uses Fibre Channel over Ethernet (FCoE), which must be prioritised and lossless, and running Cisco Unified Communications on UCS (aka UC on UCS) makes QoS an even more important and deterministic element of the data center design.

Cisco UCS uses Data Center Ethernet (DCE) to handle all traffic inside a Cisco UCS system. System classes determine how the DCE bandwidth in these virtual lanes is allocated within UCS. Cisco UCS classifies traffic using only Layer 2 CoS values, which range from 0 to 7 (0 being the lowest and 6 the highest assignable value); CoS 7 is reserved for internal traffic.

The way QoS is designed in UCS can differ depending on the network design approach and the NIC used in the UCS system (the QoS requirements have to be aligned with the network connectivity section above).

For example, using the virtualized NIC (vNIC) has several benefits, such as the ability to create multiple virtual interfaces and assign each interface a different QoS policy; each policy is allocated a CoS value and treated differently within the UCS system according to the QoS requirements. In theory there can be up to eight queues, but normally one queue/CoS is reserved for FCoE (CoS 3 by default) and another is used for management, and the rest can be allocated as per the design requirements.

However, this design approach is not always the best. UC on UCS, for example, works better with the Nexus 1000V QoS capabilities, marking the traffic at the VM port by mapping the Layer 3 DSCP value to a Layer 2 CoS value, because UCS does not map L3 DSCP to L2 CoS and Cisco UC applications mark their traffic (signalling and media) with DSCP only. The Cisco N1K here works just like any Layer 2 access switch: it performs the L3 DSCP to L2 CoS mapping, and UCS can then prioritise traffic based on this CoS value and its pre-configured policies (an illustrative marking sketch follows the reference link below).

QoS Design Considerations for Virtual UC with UCS

http://docwiki.cisco.com/wiki/QoS_Design_Considerations_for_Virtual_UC_with_UCS
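As an assumption-based illustration only (the class name, DSCP and CoS values are example choices, not a validated UC on UCS policy), a simple DSCP-to-CoS marking policy on the N1K might look like this:

  ! Example only: values are illustrative, not a validated UC on UCS policy
  class-map type qos match-any VOICE-MEDIA
    match dscp 46
  policy-map type qos DSCP-TO-COS
    class VOICE-MEDIA
      set cos 5
  port-profile type vethernet UC-VM
    vmware port-group
    service-policy type qos input DSCP-TO-COS
    no shutdown
    state enabled

UCS can then match on the resulting CoS value in its system classes, as described in the QoS design link above.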

References:

Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B and C Series Cisco UCS Manager Servers

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.html

Cisco Live 2011, Deployment of VN-Link with the Nexus 1000v.

Cisco Live 2011, End-to-End Data Centre Virtualisation.

Lowe, S., Mastering VMware vSphere 4, September 2009.

Thanks and Regards,

Marwan Alshawi
