Cisco Support Community

New Member

M81KR, VIC 1240, and VIC 1280

I have blades with these three different adapters:


The M81KR looks like it has only 2 DCE interfaces, and ESXi shows 10 Gbps full duplex.


The VIC 1240 shows only 4 DCE interfaces, but in UCSM they are numbered 1, 3, 5, 7 instead of 1, 2, 3, 4. Why?

In ESXi it shows as 20 Gbps.



The VIC 1280 shows 8 DCE interfaces, and in ESXi it shows 40 Gbps.


I am confused about the relationship between the number of IOM ports connected (server ports) and the actual bandwidth a blade gets based on the adapter it has.


I currently have 2 server ports connected per IOM per chassis. The IOM we are using is the 2208XP. Would I get more bandwidth per blade if I went to 4 ports per IOM, or even 8?
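For what it's worth, here is a back-of-envelope sketch of the arithmetic behind the speeds reported for each adapter. The lane counts come from the DCE interface counts above; the assumption that each DCE interface is a 10 Gb lane split evenly between the two fabrics is mine, not from Cisco documentation:

```python
# Sketch (assumption: each DCE interface is a 10 Gb backplane lane,
# split evenly between Fabric A and Fabric B, port-channeled per fabric).
ADAPTER_DCE_LANES = {
    "M81KR": 2,     # 1 lane per fabric  -> ESXi shows 10 Gbps per vmnic
    "VIC 1240": 4,  # 2 lanes per fabric -> ESXi shows 20 Gbps
    "VIC 1280": 8,  # 4 lanes per fabric -> ESXi shows 40 Gbps
}

def vmnic_speed_gbps(dce_lanes, lane_speed_gbps=10):
    """Speed ESXi reports per vmnic: lanes on one fabric x lane speed."""
    return (dce_lanes // 2) * lane_speed_gbps

for adapter, lanes in ADAPTER_DCE_LANES.items():
    print(f"{adapter}: {vmnic_speed_gbps(lanes)} Gbps per vmnic")
```

This matches the 10/20/40 Gbps figures ESXi reports for the three adapters.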

  • Unified Computing


Hi Tony

please see

and the attachment I posted there





New Member




What should the load balancing policy be in ESXi?

Should it be 'Route based on originating virtual port'?



You can specify load balancing/failover per vSwitch and/or per (VMkernel) interface; the latter overrides the vSwitch setting.

For regular VM traffic, 'Route based on originating virtual port' is recommended, active/active.

See e.g.:
- UCS does not take care of load balancing
- Load balancing etc. should be taken care of at the OS level
- So assuming that you are using the VMware vSwitch
- As an example, you can allocate two vmnics to the same vSwitch
- One of these vmnics can point to FI A and the other to FI B
- You can choose the FI preference in the UCS vNIC settings
- Then choose 'Route based on originating virtual port ID' as the load balancing algorithm
- This would distribute your traffic across the two interfaces
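The steps above can be sketched with esxcli; the vSwitch and vmnic names are placeholders for your own setup (vmnic0 assumed to face FI A, vmnic1 to face FI B):

```shell
# Add the second fabric-facing uplink to the vSwitch, then set the
# teaming policy to 'Route based on originating virtual port ID'
# with both uplinks active.
esxcli network vswitch standard uplink add \
    --vswitch-name=vSwitch0 --uplink-name=vmnic1

esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 \
    --load-balancing=portid \
    --active-uplinks=vmnic0,vmnic1
```

The same policy can be set per port group in the vSphere Client under the teaming and failover settings.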


However, you might configure VMkernel interfaces, e.g. for vMotion or iSCSI/NFS, differently.

Use 2 vNICs, one connected to Fabric A and the other to Fabric B, with the hardware failover flag not set, and go active/passive:


First VMkernel interface:
- vmnic0 (active)
- vmnic1 (passive)
- Policy: failback

Second VMkernel interface:
- vmnic1 (active)
- vmnic0 (passive)
- Policy: failback
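This active/passive override can be applied at the port-group level, which takes precedence over the vSwitch teaming policy; the port group and vmnic names below are placeholders:

```shell
# Pin a vMotion port group active on Fabric A (vmnic0) with Fabric B
# (vmnic1) as standby, and enable failback.
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=vMotion \
    --active-uplinks=vmnic0 \
    --standby-uplinks=vmnic1 \
    --failback=true
```

A second VMkernel port group would simply swap the active and standby uplinks.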

I hope you are aware that, e.g. in the case of vMotion, if the source is on Fabric A and the destination on Fabric B in the same VLAN, the traffic has to go out of UCS, which can be a bottleneck and adds at least one additional hop.

