
Hardware VN-Link

visitor68

I'm a bit unclear about policy migration in HW VN-Link, i.e. VM-FEX. A port group is a product of the vSwitch construct, correct? If, say, a 1000v has a port profile configured with all its associated security and VLAN characteristics, that profile is translated into a port group in vCenter. Moreover, the VM and the interface it is connected to on the 1000v are associated with that port group. When a VM is migrated from one host to another in the same vMotion cluster, the VM remains attached (bound) to the same vEthernet port on the 1000v. Therefore, the port group to which that vEthernet is bound also remains the same, and the policies follow. Simple enough.

But when one performs HW VN-Link (HW FEX), the NIV capabilities of Palo are leveraged. In this case, my understanding is that the hypervisor is either bypassed altogether (VMDirectPath I/O), in which case vMotion is not possible because the hypervisor no longer has authoritative dominion over the VM, OR the 1000v simply acts as a pass-through that does nothing more than aggregate the traffic from the downlinks to the uplinks, which are attached to the vNICs on the Palo. So, in the absence of a port profile and its associated port group (no vSwitch construct being leveraged anymore), where do the VM's policies reside?

Thanks



Jeremy Waldrop

It is important to understand that VN-Link in hardware is not VMDirectPath I/O. VMDirectPath I/O allows you to take a NIC and present it directly to a VM, so in the case of UCS the VM would see the Cisco VIC 10G adapter and you would then have to load the driver for the adapter inside the guest OS. VMDirectPath I/O does disable your ability to vMotion that VM.

VN-Link in hardware is where you have a dynamic vNIC connection policy that creates up to 54 dynamic vNICs on each service profile. In UCS you then create your vCenter connection, dvSwitch, and port groups. That dvSwitch and its port groups then show up in vCenter and you place VMs in them. Once you place VMs in a UCS-managed dvSwitch, each VM gets assigned a dynamic vNIC and a vEth interface on the 6100s. Neither ESX nor the VM realizes this is happening, and the VM sees a virtualized NIC (vmxnet3) just like normal. The vEth interface a VM is assigned on the 6100 is the same regardless of which ESX host it resides on.
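If you want to see those dynamically created vEth interfaces, you can SSH to UCS and drop down to the NX-OS shell on the fabric interconnect. A rough sketch only - you may need to specify the A or B fabric, and the exact output format and interface numbers will differ in your environment:

connect nxos
show interface brief | include Veth

Each VM NIC placed in one of the UCS-managed port groups should show up there as a Veth, and that Veth number stays with the VM when it vMotions between hosts.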

VN-Link in hardware doesn't offer the same features as the 1000v and cannot be managed from the NX-OS command line like the 1000v.

Both the Nexus 1000v and UCS VN-Link require that the same Nexus 1000v VEM be installed on each ESX host. This can be done either with Update Manager or by using the esxupdate command.
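If you go the manual route, on classic ESX it is basically a one-liner from the service console. Treat this as a sketch - the actual VEM bundle/VIB filename depends on your ESX build and VEM version:

esxupdate -b ./cross_cisco-vem-v100-<version>.vib update
vem status

The second command just verifies that the VEM module loaded on the host.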

This document talks about the theory behind VN-link in hardware and also how to configure it.

http://www.cisco.com/en/US/products/ps10281/products_configuration_example09186a0080b52d0d.shtml

Thanks

--Manish

Manish, thanks...

Jeremy, interesting stuff...

As for Direct Path I/O, I know all that already...

So, I think I am clearer now. Even with VN-Link in hardware, a vSwitch is still needed to configure the port groups/profiles and place the VMs in them. So, through UCS, a vSwitch is created, port groups are configured, and VMs are bound to them. Similar to the way it is done with VN-Link in software. After that, the VM's vNIC is mapped to a vNIC on the Palo adapter and finally mapped to a vEth on the 6100. Correct?

So, let's see... a VM's vNIC is mapped to a vSwitch vEth interface. All the vEth interfaces on the vSwitch are mapped to the uplink ports, much the same way it is done in the physical world. The vSwitch's uplink ports are mapped one-to-one to vmnics that reside on the hypervisor (I guess), and those vmnics are mapped one-to-one to the vNICs on the Palo. UGH! :-)

Does the vSwitch have to be a 1000v? Or can one use the VMware vSwitch? I would think so, since the 1000v is just acting as a pass-through - not doing much.

Thanks

The switch is a dvSwitch created and managed from UCSM and then pushed to vCenter via the connection that is created with the XML plugin you export from UCSM and register in vCenter. This works just like the Nexus 1000v. The new dvSwitch in vCenter is read-only, and beneath it you create the port profiles from UCSM that show up as port groups under the dvSwitch in vCenter.

It is a bit confusing until you go through the configuration one time.

I will echo Jeremy's comment about configuring it to get a better grip on the concept.

It falls into place very quickly once you have installed/configured it.

Below is an additional write-up on it in case theory works better for you.

Thanks

--Manish

------------------------------------------

vSphere (ESX 4.0) introduced switching alternatives to the existing standard virtual networking switch (also termed the vSwitch).

This new framework, the VMware vNetwork Distributed Switch (vDS), aims to extend the networking feature set of the VMware Standard Switch while simplifying network provisioning, monitoring, and management through an abstracted, single distributed-switch representation of multiple VMware ESX and ESXi servers in a VMware data center. The VMware vNetwork third-party vSwitch API of the vDS allows third-party vendors to offer switching infrastructure for virtual machines on ESX.

The Nexus 1000v is an example of such a vDS. The Nexus 1000v registers as a vDS in vSphere and has two components:

a) Virtual Supervisor Module (VSM)

The VSM provides the management-plane functionality of the switch, runs as a VM, and is the component that runs Cisco NX-OS. It is important to note that the VSM is not in the data path.

b) Virtual Ethernet Module (VEM)

The VEM is a lightweight component that is installed on the Hypervisor and provides the switching functionality.
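To make that concrete, here is what a port profile on the VSM looks like in NX-OS; a minimal sketch with placeholder names and VLAN (your real policies will obviously be richer):

port-profile type vethernet VM-data
  vmware port-group
  switchport mode access
  switchport access vlan 10
  no shutdown
  state enabled

Once the profile is enabled, it shows up in vCenter as the port group VM-data under the distributed switch, and the VEM on whichever host a VM lands on enforces those settings.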

VN-link in hardware is also a vDS for VMware, utilizing the same vDS framework as the Nexus 1000v, and is available on the Unified Computing System. The VEM exists on the hypervisor providing PTS (pass-through switching) functionality, while the VSM in this case is the UCS Manager, which provides the management-plane functionality.

Unlike the Nexus 1000v, there is no local switching on the VEM; all traffic is forwarded to the Fabric Interconnect, where policy application and switching happen, bringing the networking of VMs to feature parity with the networking of physical devices.

VN-link in hardware is analogous to a patch panel where there is a one-to-one correlation of the VM vNIC to an uplink (see attached).

As there is a one-to-one correlation between the VM vNIC and the uplink port, a VN-link in hardware implementation will most likely be deployed with the VIC (Palo adapter) for its ability to create multiple vNICs in hardware.

VN-link in hardware, along with moving the switching functionality to the Fabric Interconnects, provides the ability to do VMotion/DRS within the same cluster.

------------------------------------------

Manish:

VN-link in hardware is also a vDS for VMware, utilizing the same vDS framework as the Nexus 1000v, and is available on the Unified Computing System. The VEM exists on the hypervisor providing PTS (pass-through switching) functionality, while the VSM in this case is the UCS Manager, which provides the management-plane functionality.

Unlike the Nexus 1000v, there is no local switching on the VEM; all traffic is forwarded to the Fabric Interconnect, where policy application and switching happen, bringing the networking of VMs to feature parity with the networking of physical devices.

VN-link in hardware is analogous to a patch panel where there is a one-to-one correlation of the VM vNIC to an uplink (see attached).

As there is a one-to-one correlation between the VM vNIC and the uplink port, a VN-link in hardware implementation will most likely be deployed with the VIC (Palo adapter) for its ability to create multiple vNICs in hardware.

VN-link in hardware, along with moving the switching functionality to the Fabric Interconnects, provides the ability to do VMotion/DRS within the same cluster.

This post is the silver bullet, my friend! Beautiful! A concise overview and an apples-to-apples comparison between VN-Link in software and VN-Link in hardware. It's exactly the kind of information I was looking for. I wasn't sure of the VM vNIC-to-uplink mapping, and I wasn't sure whether the 1000v was indeed needed.

So, here goes: why is the 1000v necessary for VN-Link in hardware if it's only acting as a pass-through? It's basically acting as a software version of a pass-through blade module, forwarding the traffic to a mapped Palo port and up to the 6100.

I can take that question one step further and ask why any vSwitch is necessary for VN-Link in hardware.

I think there are two questions in here:

a) Why is a vSwitch required for VN-link in hardware?

b) Why is any VEM component required for VN-link in hardware?

a) A vSwitch is not required. If you look at the config guide, the vSwitch is used initially for the ESX host connectivity.

When you install an ESX host, that's what it comes up with and you don't have a choice. The IP address you specify during the install is for the Service Console, which is a port group too.

Once VN-link in hardware is set up, the vSwitch does not exist any more.

It *could* co-exist with VN-link in hardware if you had enough uplinks to give to it, but that is not recommended.

In the same way, you could have the Nexus 1000v, the VMware DVS, and the vSwitch coexist on the same host.
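If you want to check what a given host is actually carrying at any point, from the classic ESX service console something like the following helps - just a sketch, names will differ:

esxcfg-vswitch -l
esxcfg-nics -l

The first lists the standard vSwitch(es) with their port groups and uplinks, the second the physical vmnics the host sees. After VN-link in hardware is set up and the uplinks are handed over, the standard vSwitch should either be gone or left with no vmnics behind it.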

b) I believe this is your main question.

Let's look at the way you would configure VMDirectPath in UCS.

You would have to start off by creating static vNICs in the SP (hence the QoS policy, VLAN, MAC, etc. are defined).

You would mark those PCI devices for pass-through in vCenter, go to Edit Settings for a VM, and pass this PCI device to it.

The driver is loaded in the guest (the device could be an Ethernet vNIC or a graphics card) and you assign it an IP belonging to the VLAN defined in the SP for this vNIC.

You would have to make sure you don't give this PCI device to any other guest, and you lose VMotion etc.

There is no concept of port groups which define what VLAN/MAC/QoS policy etc. a particular vNIC can belong to, or be re-provisioned with on the fly.

It is pretty painful too, as PCI devices with addresses show up in the GUI in vCenter.
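Just to picture the guest side of that: the passed-through device looks like any other local NIC inside the guest, so you configure it by hand, e.g. on a Linux guest (the addressing here is purely made up; it just has to match the VLAN you put on the static vNIC in the SP):

ifconfig eth1 192.168.10.50 netmask 255.255.255.0 up

Nothing on the hypervisor side is checking that this address/VLAN combination is the right one - that bookkeeping is all on you.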

Now, with VN-link in h/w, you need the VEM component on the host to bring the "port group" concept to vNICs.

What the VEM does is map the port groups to the dynamic vNICs.

Let's say you have a VM (Guest A) that should belong to VLAN 10 (port group VM-data).

You go to the FI (which is the VSM for VN-link in h/w), create port group VM-data and other port groups if applicable, and that is pushed to vCenter.

You go to the VM (Guest A), edit settings, and select VM-data as the port group for that VM (just like you would for the vSwitch/1000v etc.).

When the VM comes up on an ESX host, the VEM does the *mapping*, i.e. takes a dynamic vNIC, puts it in the correct VLAN (as dictated by the port group definition, QoS policy, etc.), and passes the emulated vNIC to the VM. The FI is also configured in the flow, as that is where the switchport is.

If this VM (Guest A) goes away and Guest B comes up, the VEM might take the same dynamic vNIC, or any other, and impose another port group on it (port group Backup - VLAN 20).

So you need a layer which does the mapping, and that is what the VEM on the host does.
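If you ever want to see that mapping from the host side, the VEM comes with some CLI tooling; from the ESX console, something like:

vemcmd show port

will list the local ports the VEM knows about and what they are tied to (the exact columns vary by version, so take this as a pointer rather than gospel).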

The VEM also maintains state information, which is used for VMotion, as data passes through it.

VMdirectpath currently doesn't support VMotion.

The VEM is a very lightweight component (like a patch panel). You do get a performance improvement (10-15% vs. a vSwitch), and you have brought networking out of the server, so troubleshooting happens on the switch outside instead of in a local switching component in every ESX host.

How you would troubleshoot Linux/Windows running on bare metal vs. a VM in this case will be the same.

Next question which I see coming -

VMDirectPath with VMotion is going to come out soon. What will that look like?

When VMDirectPath with VMotion is supported, you still *will* have a VEM-like component. It will do the "mapping" explained above, i.e. port group to vNIC, and then the data will go directly to the vNIC (bypassing the VEM completely). When a VMotion is triggered, the VEM component will freeze the VM, copy the memory state going through the vNIC, and do the VMotion. So a VEM-like component will still exist to set up/tear down, but for normal data flow it will be out of the picture.

Hope it makes sense.

Thanks

--Manish

Manish,

I now understand - from a technical perspective - why Cisco's VN-link in hardware requires the 1000v. It's engineered to work that way and you have given some specifics.

But back to question a): I meant vDS, not vSwitch. So, to re-ask the same question, is the vDS that already comes with VMware capable of working with the 6100 to provide hardware-based VN-Link? Or does someone have to buy the 1000v?

No, VN-link won't work with the existing vSwitch or the VMware DVS, i.e. it needs its own host component (the VEM).

Just to clarify something here -

The VEM (host component) in the case of VN-link in h/w is the *same* VEM module (software bits) as the Nexus 1000v's.

The VEM moves to Nexus 1000v (local switching) or VN-link in hardware (pass-through, or patch-panel-like) mode depending on whether it sees dynamic vNICs existing on the host.

For VN-link in hardware, you do not have to "buy" Nexus1000v.

Just install the VEM module (using VUM or manually) on an ESX server with a Palo (dynamic vNICs defined) and you are good to go.

VN-link in h/w only exists on UCS right now, for the VIC (Palo), and is free as I mentioned above.

The Nexus 1000v, as you know, you can install on any host out there with whichever network adapter.

You do need Enterprise Plus on ESX for VN-link in h/w: it is a vDS, and for vDS functionality you need Enterprise Plus on the host.

Thanks

--Manish

Manish... I meant to give you 5 stars - mis-click.

I'm confused about one thing: how is it that you don't have to buy the Nexus 1000v but you have to install its VEM...? Is the VEM given away for free?

[EDIT] By the way, you are an encyclopedia, my friend! [EDIT]

Thanks

The VEM on its own is not of any use, so it's free. It has to work with a VSM.

The VSM for a Nexus 1000v is a VM on any ESX host, or it can be hosted on the Nexus 1010 appliance.

The licenses are entered on the VSM, and that's how the VEMs are allowed to be a part of that distributed switch.

For a Nexus 1000v bundle (VEM+VSM) you need licenses... it's a revenue model, as the Nexus 1000v can work on any server/adapter out there, i.e. it is not free.

For VN-link in h/w, the VSM is the FI. The licenses are not required, as it's known that it is running on UCS, since VN-link in h/w is only supported there.

i.e. it comes with the hardware, and the hope is you will buy more UCS.

Instead of having two VEMs - one for the Nexus 1000v and one for VN-link in h/w - it was chosen to keep one VEM. That VEM moves to Nexus 1000v or VN-link in h/w mode depending on a condition (dynamic vNICs). Much simpler, as a new repository doesn't need to be made for VUM etc.

It is also in line with the VN-link message, i.e. VN-link in software (Nexus 1000v) and VN-link in h/w require the same VEM. Currently the VSM is not the same for both, but it paves the way for them to be clubbed together at some point in the future if need be.

Hope it makes sense... and don't worry about the "stars". If the reply addressed your question, we are good.

Thanks

--Manish

Manish, thank you very much for your patience in answering all my questions. I know I asked a lot of them. :-)

Manish - excellent post.

I had a basic question..

Traffic between two VMs on the same ESX server (on the same VLAN) wouldn't be switched through the uplink switch, right? It would be switched directly by the VEM/ESX locally through its kernel?

Can you confirm whether my understanding is right in the following scenarios:

1) Traffic between 2 hosts on the same ESX server, same VLAN - switched locally by ESX (doesn't go upstream)

2) Traffic between 2 hosts on different ESX servers, same VLAN - switched to the uplink (VM-DATA) and Layer 2 forwarded to the other ESX

3) Traffic between 2 hosts on the same ESX, different VLANs - forwarded to the uplink since Layer 3 isn't defined in the VSM. In my case it goes to the Dist1 switch and comes back

Right ?
