Cisco Support Community

Nexus 1000v QoS Limitations



I'm looking for a bit of clarity on the resource limitation for policy-maps on the Nexus 1000v. From the docs it seems pretty clear; however, when I look at my modules on the 1000v, I see differing numbers of 'instances' even though the same policy-map is applied to the management uplinks on each module.


As an example, we have a policy-map attached to our management uplinks (two Ethernet uplinks bundled into a port-channel), and we can see this output below:

atl2-vsm-01# sh resource-availability qos-queuing

Maximum number of instances per DVS is 4096
The number of instances created is 1116
The number of instances available is 2980

Maximum number of instances per module is 300

Following table shows the per module instance usage
Module  Used  Available
     3    16   284
     4    25   275
     5    20   280
     6    19   281


The same policy-map is applied to each one of those modules, and it is attached to the same number of management uplinks on each module; the 'used' column, however, is what's throwing me a curve.
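For context, the policy is attached through the uplink port-profile roughly like this (a sketch only; the port-profile and policy-map names here are made-up placeholders, not our actual configuration):

```
port-profile type ethernet MGMT-UPLINK
  vmware port-group
  switchport mode trunk
  channel-group auto mode on
  ! queuing policy applied on egress of the uplink
  service-policy type queuing output MGMT-QUEUING
  no shutdown
  state enabled
```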


1. When the output references 'instances', do those instances correlate with the 'used' column in the per-module table? (I know this seems obvious from the output.)

2. If the same policy-map is applied to each of these modules, and is applied to the same two Ethernet uplinks, how come the 'used' column varies per host?


We are a service provider with a large customer footprint, and I'm trying to find a way to rate-limit customers in our environment based on each customer's purchased CIR. However, there seem to be scalability issues with policing on the 1000v, so I'm looking for any insight into how to solve this problem.
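What I had in mind per customer is something along these lines (a sketch only; the class-map match criterion, the names, and the CIR value are hypothetical examples, not production config):

```
! one ACL/class/policy set per customer
class-map type qos match-any CUSTOMER-A
  match access-group name CUSTOMER-A-ACL

policy-map type qos CUSTOMER-A-POLICE
  class CUSTOMER-A
    ! police at the customer's purchased CIR
    police cir 10 mbps bc 200 ms conform transmit violate drop
```

Multiplied across our customer base, that is one policy instance per attachment point, which is where the per-module instance limit shown above becomes the concern.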


Thanks in advance for any help provided.


