Cisco Support Community

Nexus 1000v new interfaces added not coming up/not coherent

Hello all,

Just posting this in order to find out if anyone has already had the same problem.

We are running Nexus 1000V 4.0(4)SV1(3a) over ESX 4.1 and installed the VEM on the ESX hosts; everything seemed OK. We are leaving the Service Console and all other interfaces on the vSwitch, and smoothly migrated one interface over to the DVS.

The configuration uses vPC host-mode (channel-group auto mode on mac-pinning).
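For reference, our uplink port-profile looks roughly like this (profile name and VLAN list are placeholders, not the exact values from our setup):

```
port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 10
  state enabled
```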

When migrating one pNIC over to the DVS, the interface is created on the Nexus, which in turn also creates the Po interface. But then the interface either stays down, or not all the VLANs seem to flow through it, as seen by vCenter.

When trying to do a shut/no shut, the VSM CLI just seems stuck waiting for something for about one or two minutes, and then reports that the command was invalid.

A "sh int status" listing also hangs when it reaches the troubled interface, and then continues after around one minute...
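In case it helps, these are the checks I know of for comparing what the VSM thinks against what the VEM actually has programmed (standard Nexus 1000V tooling; I haven't captured the output for the broken state yet):

```
# On the ESX host service console:
vem status            # confirm the VEM module is loaded and running
vemcmd show port      # the VEM's own view of port state
vemcmd show trunk     # which VLANs the VEM is actually trunking

# On the VSM:
show module           # confirm the VEM is attached to the VSM
show port-profile usage
```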

Checking the configuration, everything looks OK, but the problem persists. Then, for lack of ideas, we rebooted the VSM, and afterwards the interface came up and all VLANs flowed through, without any change to the upstream switch config.

After that reboot, we can shut/no shut that interface and reboot that specific host without problems.

When we add a new host, the same symptom occurs... and nothing works except, again, a reboot.

Did anybody experience the same? What did you do to solve the issue?

It seems that the first time around, the VSM has issues programming the VEM for the newly added interface, but I couldn't find any error message that would corroborate that.

Thanks for any insight

