
Nexus 1000v - Slow DHCP?

dpaluszek
Level 1

All:

One of my customers is having a rather strange issue that I'm a little intrigued by. I have opened a ticket with Cisco TAC already and we just can't seem to put our finger on it. Here's what's happening:

- Customer is in a Virtual Desktop environment using Windows XP SP3 as the Guest OS. When users boot up their VM, there is a prolonged delay in getting an IP address on the assigned VM. Sometimes tech support staff has to force a renew in the VM so users can log in. From what I heard, there can be a 5 to 7 minute delay at certain times; however, I could not reproduce that behavior in my testing.

- In my testing, there is roughly a 24-second delay using the Nexus dvSwitch compared to a standard vSphere vSwitch on the exact same VLAN.

- Physical desktops do not experience any issues.

Information:

- vSphere Update 1 latest build

- Five-node vSphere cluster - the behavior occurs on all hosts

- Nexus 1000V 4.0(4)SV1(2) (I can upgrade, but I couldn't find any information indicating that an upgrade resolves this type of issue)

- Two uplinks, system and vm; the config is posted below.

- Using Blue Cat Networks DHCP/DNS virtual appliances (we moved them out of the production environment onto isolated hosts; no change in behavior). They have two DHCP/DNS appliances with a single management server.

Packet Captures:

- We ran a packet capture on a VM and analyzed the traffic. For some reason, we see an exact 24-second delay after packet 8. Starting with the 9th packet, we receive two DHCP Inform packets followed by two ACKs, then two more Informs and two more ACKs. This doesn't follow the standard DHCP DORA sequence (Discover, Offer, Request, ACK).

- The physical desktop capture did not show any Inform packets and completed the DHCP process in 10 packets (compared to 20 for the Nexus VM capture - probably due to the helper config on the router).
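
For reference on the relay piece: the higher packet count on the VM side would be consistent with the routed interface for the desktop VLAN relaying to both DHCP appliances, so the client sees responses from both. A minimal sketch of what that helper config typically looks like on the 6500, using a hypothetical SVI and placeholder addresses (not the actual values in this environment):

interface Vlan10
 description ### VDI desktop VLAN (placeholder) ###
 ip address 10.10.10.1 255.255.255.0
 ip helper-address 10.20.30.11
 ip helper-address 10.20.30.12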

Config on Nexus uplinks:

CH-n1000v# sho run port-profile system-uplink

version 4.0(4)SV1(2)

port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 48-49
  channel-group auto mode on
  no shutdown
  system vlan 48-49
  state enabled

port-profile type ethernet vm-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 0-47,50-4096
  channel-group auto mode on
  no shutdown
  state enabled

Config on physical uplink switch (single 6500)

VM-UPLINK TRUNK:

Jail-6500#show run interface g4/38

interface GigabitEthernet4/38
 description ### VMRSC4 - vm-uplink ###
 no ip address
 speed 1000
 duplex full
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 2-47,50-4094
 switchport mode trunk
 channel-group 13 mode on
end

SYSTEM-UPLINK TRUNK:

Jail-6500#show run interface g4/40

interface GigabitEthernet4/40
 description ### VMRSC4 - system-uplink ###
 no ip address
 speed 1000
 duplex full
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 48,49
 switchport mode trunk
 channel-group 12 mode on
end

I am not seeing any errors in the Nexus logs that point to any kind of issue.

Any thoughts?

5 Replies

ryan.lambert
Level 1

I'm sort of confused, since the ethernet port-profiles attached to your physical adapters aren't what I would expect in a port-channel configuration. I'd expect to see both uplinks carrying the same VLANs and using the same ethernet port-profile from the 1000v. Is there any actual port-channeling going on, or are these just standalone uplinks, exactly two in total as your message states, with channel-group configured but not doing anything?
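
For what it's worth, a quick way to check on the VSM whether those channel-groups actually formed a bundle is something along these lines (illustrative commands, not output from this environment):

CH-n1000v# show port-channel summary
CH-n1000v# show interface brief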

If you need to split certain traffic onto a specific uplink manually due to bandwidth/IO constraints, and that was the reasoning behind the decision, I'd suggest running vPC with mac-pinning and making these uplinks redundant (or alternatively, adding another pair of uplinks). With mac-pinning you have the flexibility to pin specific VLANs to one uplink or the other (unless iSCSI multipathing is enabled), so it gives you some control over your traffic flows. It is not necessary to configure a port-channel on the physical switch with mac-pinning -- just a matching pair of trunks.
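
As a rough sketch of that approach (the profile name and VLAN list below are placeholders, not specific recommendations), the mac-pinning uplink profile on the 1000v would look something like this, with plain matching trunks and no channel-group on the 6500 side:

port-profile type ethernet vm-uplink-macpin
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 2-47,50-4094
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled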

I know that doesn't directly address your symptom, but if you don't need a lot of the channel-group configuration that is there, cleaning it up a bit may or may not help some.

Sorry, I guess I should have been a little more clear.

- The system-uplink port-profile has two channel-group members on each vSphere host. The vm-uplink has three members in its port-channel on each vSphere host.

- system-uplink only carries our control and packet VLANs. When we were allowing all VLANs to run through the vm-uplink, we had issues with port flapping while using mac-pinning (which is not used anymore), so we excluded 48 and 49 (control and packet) from the VLAN list on the vm-uplink.

- I did go through the DHCP snooping article and enabled it. No difference in testing.
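
For reference, what I enabled was roughly along these lines (a sketch, not the exact config; the VLAN number is just an example, and where the trust statement belongs depends on where the DHCP appliances connect):

feature dhcp
ip dhcp snooping
ip dhcp snooping vlan 10

port-profile type ethernet vm-uplink
  ip dhcp snooping trust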

Got it... that makes sense.

Question - when you had mac-pinning enabled, did you also have channel-group configured on your physical switches? I have heard of that flapping symptom when that's the case.

Also, are you able to configure portfast trunk on your uplinks? Since the 1000v does not participate in STP and has its own loop-prevention logic, there's no reason to run active STP on your 1000v-connected uplinks.
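
On the 6500 that would just be something like the following on the two 1000v-facing trunks (interface numbers taken from your earlier output; adjust for the other hosts):

interface GigabitEthernet4/38
 spanning-tree portfast trunk
!
interface GigabitEthernet4/40
 spanning-tree portfast trunk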

We did not; the flapping was due to the packet and control VLANs running through the vm-uplink at certain times.

Yes, we can configure portfast trunk on our uplinks, but it doesn't seem to make a difference.
