04-11-2014 08:28 AM - edited 03-07-2019 07:04 PM
I have a couple of questions that I can't find the answers to (which surprised me).
Do all the NICs in a team need to be plugged into the same blade (switch), or is the chassis technically a single stacked switch?
The HP server NIC teaming and ESX NIC teaming docs list different hash methods (see the link below).
How do you reconcile this, or does it matter? Should you specify the load-balancing method per switch module and plug all the NICs in a team into that specific switch?
http://www.cisco.com/c/en/us/support/docs/lan-switching/etherchannel/98469-ios-etherchannel.html
Thank you for any responses,
JT
04-11-2014 09:56 AM
JT,
If the 6500s are not running VSS, you can only connect your NICs to the same switch, though they can go to different blades. If you are running VSS, you can split the connections between the two switches.
HTH
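To check whether a pair of 6500s is running as a VSS domain, a quick sanity check from the CLI (a sketch; output details vary by software version) is:

```
! On a VSS pair, these show the virtual switch domain number
! and which chassis is active vs. standby
Switch# show switch virtual
Switch# show switch virtual role
```

If `show switch virtual` reports a virtual switch domain and active/standby chassis roles, the pair is running VSS and an EtherChannel can be split across the two chassis (multichassis EtherChannel). On a standalone 6500 these commands are not recognized.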
04-12-2014 09:31 AM
Is each slot a switch or a blade? That is where I am getting confused. If I have three 1gb blades, is that a single switch?
The line cards in a 6500 chassis are not separate switches; the chassis is the switch.
As Reza says, if you have a pair of 6500s and they are not running VSS, then for EtherChannel you can only connect both NICs to one chassis, but for redundancy you should connect each NIC to a different line card.
If you are running VSS then you can connect each NIC to a different chassis.
Jon
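As a sketch of the non-VSS layout described above (both server NICs on one chassis, but on different line cards for redundancy; the interface and VLAN numbers are placeholders):

```
! Gi1/1 is on the line card in slot 1, Gi2/1 on the line card in slot 2
interface range GigabitEthernet1/1 , GigabitEthernet2/1
 switchport
 switchport mode access
 switchport access vlan 100
 channel-group 10 mode on
!
interface Port-channel10
 switchport
 switchport mode access
 switchport access vlan 100
```

Note `mode on` (static EtherChannel) is shown because the classic ESX standard vSwitch "route based on IP hash" teaming does not speak LACP; with a physical HP/Windows server whose teaming software supports LACP, `mode active` could be used instead.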
04-11-2014 11:39 AM
How can I tell whether the 6500 is running VSS?
Is each slot a switch or a blade? That is where I am getting confused. If I have three 1gb blades, is that a single switch?
04-11-2014 01:11 PM
What do you recommend concerning the different load-balancing methods specified in the two docs? It seems VMware is probably the most restrictive, but HP and Windows servers should be able to handle most of the options.
I think I can set the 6500 to use a specific load-balancing method per blade, but that would require connecting all the links that share a load-balancing method to a single blade, which would eliminate switch fault tolerance.
04-12-2014 08:51 AM
JT,
One other thing you should look into: sometimes you don't need port channels toward the ESX hosts at all, because the hosts use NIC teaming, and when one logical NIC fails the traffic simply shifts to the other logical NIC without losing a single ping. The same thing happened when we disconnected a physical link. We did this testing with Dell servers, Nexus 2ks and 6ks. The 6ks do not run any port channel, just a simple trunk, and it works really well. You are using 6500s and HP, so it may all be different, but FYI in case you want to test it.
HTH
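A minimal sketch of the no-port-channel approach described above, where each server NIC lands on an ordinary trunk port and failover is handled entirely by the host's NIC teaming (interface and VLAN numbers are placeholders):

```
! Two independent trunk ports, no channel-group at all;
! the host teaming policy (e.g. ESX "route based on
! originating virtual port ID") handles failover
interface GigabitEthernet1/1
 switchport
 switchport mode trunk
 switchport trunk allowed vlan 100,200
 spanning-tree portfast trunk
!
interface GigabitEthernet2/1
 switchport
 switchport mode trunk
 switchport trunk allowed vlan 100,200
 spanning-tree portfast trunk
```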
04-12-2014 09:31 AM
It's an EtherChannel setting: on the 6500 you configure the load-balancing method for EtherChannels globally on the switch, not per line card.
Basically, you pick the method that will most evenly distribute the traffic, e.g. if most of the connectivity to and from the server involves remote subnets, then src-dst-ip would be a good choice.
Jon
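For reference, the hash method on a Catalyst 6500 is set globally and can be verified like this (src-dst-ip shown as the example suggested above):

```
Switch(config)# port-channel load-balance src-dst-ip
Switch(config)# end
Switch# show etherchannel load-balance
```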