
1000v VSM doesn't push config changes to vCenter

eric.bauer
Level 1

Team,

My apologies if there are similar posts on this forum but I'm in a bit of a time crunch.

I have a situation where a client's VSM is no longer pushing any config changes to vCenter.  The following snippet from the config states 22 VLANs configured for the VM traffic uplink:

port-profile type ethernet 1000v-vm-traffic-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 32,75,120-124,253,255,259-261,263,266-268,270,740,745,785,797,888
  channel-group auto mode on sub-group cdp
  no shutdown
  description 1000v virtual machine traffic uplink
  state enabled

However, the vCenter vDS configuration is missing 6 of the VLANs (120-124, 270):

[screenshot attachment: Capture.JPG]

The VSM is connected to vCenter (show svs connections), it can ping vCenter, and I've validated that the XML plugin matches the VSM extension key.  I've tried removing the VLANs in question and re-creating them, but that did not trigger any updates to vCenter.  The control and packet VLANs are configured as system VLANs in the port profiles:

port-profile type ethernet 1000v-system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 3321-3322
  channel-group auto mode on sub-group cdp
  no shutdown
  system vlan 3321-3322
  description 1000v packet control system uplink
  state enabled

port-profile type vethernet 1000v-control
  vmware port-group
  switchport mode access
  switchport access vlan 3322
  no shutdown
  system vlan 3322
  state enabled
port-profile type vethernet 1000v-packet
  vmware port-group
  switchport mode access
  switchport access vlan 3321
  no shutdown
  system vlan 3321
  state enabled

This was working at the initial install (hence the VLANs present in the vCenter uplinks) but has since stopped syncing.  Any help would be greatly appreciated, and let me know if I can provide any additional information.

Thanks in advance!

virtualEB

22 Replies

Robert Burns
Cisco Employee

1. Can you paste your full running config

2.  Do you have any VMs using the "missing" VLANs?

Regards,

Robert

Thanks for the reply, Robert.

Full running config is attached.  There are four VMs attached to one of the missing VLANs.  Those VMs cannot ping the gateway but the VSM can.

Attachment failed - pasted config

** Removed

Don't see anything attached.

If you create a "test" vEthernet Port Profile, does it push to VC?

port-profile type vethernet Test
  vmware port-group
  switchport mode access
  switchport access vlan 1
  no shutdown
  state enabled

Also can you paste the following output:

show sys internal mts buffers

Robert

Well, I guess it is syncing because that Test PP did show up.  I removed one of the VLANs in question earlier today (VLAN 120) in an attempt to get the uplink PP to re-sync.  I'll try removing/re-adding one of the VLAN vEth PPs...

mts buffers:

MTS buffers in use = 10

Eric,

Also confirm the missing VLANs exist on all the upstream switches.
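For example, on each upstream switch (assuming Cisco IOS or NX-OS; VLAN 120 here is just one of the missing ones), something like:

show vlan id 120
show interface trunk

should confirm the VLAN exists and is allowed on the trunks facing the hosts.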

Robert

We confirmed that with the customer today. We also tested ping to the VLAN gateways from VSM which was successful.

Find out which module running one of the affected VMs:

show int virtual

Then execute the following for that module #:

module vem x execute vemcmd show vlan

module vem x execute vemcmd show port

module vem x execute vemcmd show trunk

Robert

Module 22, and vemcmd show port reports some VLANs blocked.  I was also curious earlier today when I ran show vlan: there were no port-channel interfaces attached to VLAN 120.  Why would a VLAN be blocked, and how do you unblock it?

n1000v-200paul(config)# module vem 22 execute vemcmd show vlan
BD 1, vdc 1, vlan 1, 1 ports
Portlist:
    305

BD 32, vdc 1, vlan 32, 6 ports
Portlist:
     21  vmnic4
     22  vmnic5
     49  SP-APPSVC01.eth1
     50  SP-WFEND1.eth1
     52  SP-WFEND2.eth1
    305

BD 75, vdc 1, vlan 75, 3 ports
Portlist:
     21  vmnic4
     22  vmnic5
    305

BD 120, vdc 1, vlan 120, 2 ports
Portlist:
     51  SP-WFEND1.eth0
     53  SP-WFEND2.eth0

BD 121, vdc 1, vlan 121, 0 ports
Portlist:
BD 122, vdc 1, vlan 122, 0 ports
Portlist:
BD 123, vdc 1, vlan 123, 0 ports
Portlist:
BD 124, vdc 1, vlan 124, 0 ports
Portlist:
BD 253, vdc 1, vlan 253, 3 ports
Portlist:
     21  vmnic4
     22  vmnic5
    305

BD 255, vdc 1, vlan 255, 3 ports
Portlist:
     21  vmnic4
     22  vmnic5
    305

BD 259, vdc 1, vlan 259, 3 ports
Portlist:
     21  vmnic4
     22  vmnic5
    305

BD 260, vdc 1, vlan 260, 0 ports
Portlist:
BD 261, vdc 1, vlan 261, 0 ports
Portlist:
BD 263, vdc 1, vlan 263, 3 ports
Portlist:
     21  vmnic4
     22  vmnic5
    305

BD 266, vdc 1, vlan 266, 3 ports
Portlist:
     21  vmnic4
     22  vmnic5
    305

BD 267, vdc 1, vlan 267, 3 ports
Portlist:
     21  vmnic4
     22  vmnic5
    305

BD 268, vdc 1, vlan 268, 3 ports
Portlist:
     21  vmnic4
     22  vmnic5
    305

BD 270, vdc 1, vlan 270, 0 ports
Portlist:
BD 740, vdc 1, vlan 740, 3 ports
Portlist:
     21  vmnic4
     22  vmnic5
    305

BD 745, vdc 1, vlan 745, 3 ports
Portlist:
     21  vmnic4
     22  vmnic5
    305

BD 785, vdc 1, vlan 785, 3 ports
Portlist:
     21  vmnic4
     22  vmnic5
    305

BD 797, vdc 1, vlan 797, 3 ports
Portlist:
     21  vmnic4
     22  vmnic5
    305

BD 888, vdc 1, vlan 888, 0 ports
Portlist:
BD 3321, vdc 1, vlan 3321, 3 ports
Portlist:
     12
     19  vmnic2
     20  vmnic3

BD 3322, vdc 1, vlan 3322, 3 ports
Portlist:
     10
     19  vmnic2
     20  vmnic3

BD 3968, vdc 1, vlan 3968, 3 ports
Portlist:
      1  inband
      5  inband port security
     11

BD 3969, vdc 1, vlan 3969, 2 ports
Portlist:
      8
      9

BD 3970, vdc 1, vlan 3970, 0 ports
Portlist:
BD 3971, vdc 1, vlan 3971, 2 ports
Portlist:
     14
     15

n1000v-200paul(config)# module vem 22 execute vemcmd show port
  LTL   VSM Port  Admin Link  State  PC-LTL  SGID  Vem Port
   19    Eth22/3     UP   UP    F/B*      0        vmnic2
   20    Eth22/4     UP   UP    F/B*      0        vmnic3
   21    Eth22/5     UP   UP    F/B*    305     1  vmnic4
   22    Eth22/6     UP   UP    F/B*    305     0  vmnic5
   49      Veth1     UP   UP    FWD       0     0  SP-APPSVC01.eth1
   50      Veth9     UP   UP    FWD       0     0  SP-WFEND1.eth1
   51     Veth17     UP   UP    FWD       0        SP-WFEND1.eth0
   52      Veth2     UP   UP    FWD       0     1  SP-WFEND2.eth1
   53     Veth13     UP   UP    FWD       0        SP-WFEND2.eth0
  305       Po39     UP   UP    F/B*      0

* F/B: Port is BLOCKED on some of the vlans.
Please run "vemcmd show port vlans" to see the details.

n1000v-200paul(config)# module vem 22 execute vemcmd show port vlans
                        Native  VLAN   Allowed
  LTL   VSM Port  Mode  VLAN    State  Vlans
   19    Eth22/3   T        1   FWD    3321-3322
   20    Eth22/4   T        1   FWD    3321-3322
   21    Eth22/5   T        1   FWD    32,75,253,255,259,263,266-268,740,745,785,797
   22    Eth22/6   T        1   FWD    32,75,253,255,259,263,266-268,740,745,785,797
   49      Veth1   A       32   FWD    32
   50      Veth9   A       32   FWD    32
   51     Veth17   A      120   FWD    120
   52      Veth2   A       32   FWD    32
   53     Veth13   A      120   FWD    120
  305       Po39   T        1   FWD    32,75,253,255,259,263,266-268,740,745,785,797

n1000v-200paul(config)# module vem 22 execute vemcmd show trunk
Trunk port 6 native_vlan 1 CBL 0

Trunk port 16 native_vlan 1 CBL 1
vlan(1) cbl 1, vlan(32) cbl 1, vlan(75) cbl 1, vlan(120) cbl 1, vlan(121) cbl 1, vlan(122) cbl 1, vlan(123) cbl 1, vlan(124) cbl 1, vlan(253) cbl 1, vlan(255) cbl 1, vlan(259) cbl 1, vlan(260) cbl 1, vlan(261) cbl 1, vlan(263) cbl 1, vlan(266) cbl 1, vlan(267) cbl 1, vlan(268) cbl 1, vlan(270) cbl 1, vlan(740) cbl 1, vlan(745) cbl 1, vlan(785) cbl 1, vlan(797) cbl 1, vlan(888) cbl 1, vlan(3321) cbl 1, vlan(3322) cbl 1, vlan(3968) cbl 1, vlan(3969) cbl 1, vlan(3970) cbl 1, vlan(3971) cbl 1,
Trunk port 19 native_vlan 1 CBL 0
vlan(3321) cbl 1, vlan(3322) cbl 1,
Trunk port 20 native_vlan 1 CBL 0
vlan(3321) cbl 1, vlan(3322) cbl 1,
Trunk port 21 native_vlan 1 CBL 0
vlan(32) cbl 1, vlan(75) cbl 1, vlan(253) cbl 1, vlan(255) cbl 1, vlan(259) cbl 1, vlan(263) cbl 1, vlan(266) cbl 1, vlan(267) cbl 1, vlan(268) cbl 1, vlan(740) cbl 1, vlan(745) cbl 1, vlan(785) cbl 1, vlan(797) cbl 1,
Trunk port 22 native_vlan 1 CBL 0
vlan(32) cbl 1, vlan(75) cbl 1, vlan(253) cbl 1, vlan(255) cbl 1, vlan(259) cbl 1, vlan(263) cbl 1, vlan(266) cbl 1, vlan(267) cbl 1, vlan(268) cbl 1, vlan(740) cbl 1, vlan(745) cbl 1, vlan(785) cbl 1, vlan(797) cbl 1,
Trunk port 305 native_vlan 1 CBL 0
vlan(32) cbl 1, vlan(75) cbl 1, vlan(253) cbl 1, vlan(255) cbl 1, vlan(259) cbl 1, vlan(263) cbl 1, vlan(266) cbl 1, vlan(267) cbl 1, vlan(268) cbl 1, vlan(740) cbl 1, vlan(745) cbl 1, vlan(785) cbl 1, vlan(797) cbl 1,

One of your Uplink Port Profiles "1000v-system-uplink" is not auto-creating the port channel.

You can see that you have a Bundle LTL 305 for the VM-uplink profiles, but you're missing the one for your 1000v-system-uplink.

I see you have 4 NICs (2 for the system uplink, 2 for the VM uplink).  I assume there are at least two upstream switches involved.  CDP relies on timers and can sometimes cause delays with port channels coming up.

As a test, on your VSM, create a "new" duplicate port profile for your system uplink, but rather than "channel-group auto mode on sub-group cdp" use "channel-group auto mode on mac-pinning".  Once created, migrate one of your VEMs to it.
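A sketch of such a duplicate profile, mirroring the 1000v-system-uplink config from earlier in the thread with only the channel-group line changed (the profile name here is just an example):

port-profile type ethernet 1000v-system-uplink-macpin
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 3321-3322
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 3321-3322
  description 1000v packet control system uplink mac-pinning test
  state enabled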

Once that's been done, re-collect the following outputs for the modified module:

module vem x execute vemcmd show port

I suspect the underlying issue is a control VLAN communication problem affecting your VEMs; the control VLAN is what carries VEM programming.
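To sanity-check control VLAN communication from the VSM side, a couple of standard VSM show commands (worth confirming against your software version) are:

show svs domain
show module

The first confirms the control/packet VLAN settings the VSM is using; the second shows which VEM modules are currently checked in with the VSM.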

Robert

Thanks, Robert.  Couple questions:

1.  What would the impact be if I change the current system uplink to use mac-pinning on the channel-group (rather than creating a new test one)?

2.  Should mac-pinning also be applied to the vm traffic uplink?

The reason I ask is that this system is essentially in production, so it will take some coordination with the customer to use one of the hosts for VEM migration.

Answers inline.

eric.bauer@inxi.com

Thanks, Robert.  Couple questions:

1.  What would the impact be if I change the current system uplink to use mac-pinning on the channel-group (rather than creating a new test one)?

-You can do this, but you could momentarily blackhole traffic, as the two links would both be forwarding with no host-side port channeling to hash outgoing traffic correctly.  This may cause duplicate packets.  The much safer approach is to create a duplicate port profile and just change it to mac-pinning.  That way you can change over each host one by one and avoid any "global" impact to all your VEMs.  I'd suggest you migrate VMs off your host, change to the newly created PP, and then verify the Bundle LTL is created before migrating VMs back to this host for testing.  I've seen people try to "quickly swap" channeling commands on their "used" port profile and then run into issues if the commands are not applied swiftly.  Lose connectivity to your control VLAN and you lose the ability to manage your host.

2.  Should mac-pinning also be applied to the vm traffic uplink?

Personally, I've seen much better results with MAC pinning.  It's simple, easy to configure, requires no special upstream config or CDP capabilities, and doesn't rely on timer-based CDP.  In your case, since you only have a single link to each upstream switch for each port profile, you gain nothing from "sub-group cdp".  Where the CDP channeling method adds value is when you have multiple uplinks to each switch; currently you have only one member interface in each sub-group using CDP, so you're not gaining any benefit.  I'd test it first on your problem uplink port profile (system) and see how it behaves.  If all is well, you may choose to do the same with your vm-uplink port profile.

The reason I ask is that this system is essentially in production, so it will take some coordination with the customer to use one of the hosts for VEM migration.

Regards,

Robert

One test I'd like you to try first:

On your original VM-Uplink port profile (using CDP) issue this command:

switchport trunk allowed vlan except 3321-3322

Let me know if this happens to resolve your issue.  I suspect there might be a problem with the length of your "switchport trunk allowed vlan" command.
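In other words, under that profile the line would replace the long explicit list with something like:

port-profile type ethernet 1000v-vm-traffic-uplink
  switchport trunk allowed vlan except 3321-3322

which allows all VLANs except the control/packet pair and keeps the command short.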

Regards,

Robert
