High Discard rates on Nexus 1000v interfaces

Unanswered Question
Mar 25th, 2010

We recently installed our UCS chassis running ESX and the 1000v.  I was just wondering if it is normal to see high amounts of discards on most of the 1000v interfaces?  Everything seems to be working fine; it's just something we noticed while monitoring the 1000v via SNMP.

mmehta Thu, 03/25/2010 - 14:54


Do you have uplinks configured as part of a port-channel with the "vpc mac-pinning" configuration? If yes, multicast/broadcast traffic is accepted only on a designated interface, and the other member interfaces will drop their copies. These drops could be what you are observing. Can you please provide the configuration of the uplink port-profile, and the 'show' command output indicating the discards?
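For reference, a typical mac-pinning uplink port-profile on the 1000v looks something like the sketch below (the profile name and VLAN range are placeholders, not taken from this thread):

```
port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 1-100
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled
```

With `channel-group auto mode on mac-pinning`, each vEthernet interface is pinned to one uplink subgroup. Broadcast/multicast flooded by the upstream switch still arrives on every member link, but the VEM accepts it only on the designated member and counts the duplicate copies on the other members as input discards.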



jms112080 Fri, 03/26/2010 - 08:46

The uplink profile is configured as a port-channel and that is applied to all 8 ESX servers.  We are not intentionally doing any mac-pinning, unless that is the default behavior for the 1000v with port-channels.

Hey JMS, saw your post via a Google query. I have noticed the same thing using SolarWinds to monitor the Nexus: 21 million packets dropped in 8 hours. All the Nexus 1000v uplink ports are the ones discarding the traffic. I wonder why? Have you been able to find out any more on this? If you find out something, can you pass it on to this former bubblehead? Thanks.

ryan.lambert Mon, 04/19/2010 - 11:42

I have a similar issue, using vPC with mac-pinning on the 1KV.

From what I can tell, the symptoms seem to follow what Munish suggested. There is no actual performance impact, although it does throw a bit of a wrench into interpreting interface statistics on the DVS when troubleshooting.
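For anyone hitting this later: one way to sanity-check that explanation is to compare discard counters across the port-channel members (the interface numbers below are placeholders for your own uplinks):

```
show port-channel summary
show interface ethernet 3/2 counters errors
show interface ethernet 3/3 counters errors
```

If only the non-designated members accumulate input discards while traffic otherwise flows normally, the drops are most likely the expected broadcast/multicast duplicates from mac-pinning rather than an actual fault.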

