09-14-2007 04:25 AM - edited 03-05-2019 06:29 PM
Hi all, I need some help. I have a client where we are setting up EtherChannel on a 6500 to a server. I have successfully done this once already without any issues and it's in production. I have gone to set up another one for another server, 2 ports only. I labeled it port-channel 15, brought it up, and made sure the ports had the appropriate "channel-group" command. But it doesn't seem to work correctly. I can ping some servers from the server I'm working with on this, but not most other servers. I tried to do a "clear arp-cache", but when I do a "show arp" immediately after, it's like it doesn't clear it out. Oh, and when I take EtherChannel off and break the server NIC teaming, all works fine. Does this sound familiar to anyone? Any help would be appreciated. Thanks.
09-14-2007 05:31 AM
Which mode are you using for the EtherChannel, and which team type on the server interfaces?
09-14-2007 06:08 AM
It's mode "on" on the 6500. Here is the config:
interface Port-channel15
switchport
switchport mode access
no ip address
spanning-tree portfast
interface GigabitEthernet1/1
switchport
switchport mode access
no ip address
speed 1000
duplex full
spanning-tree portfast
channel-group 15 mode on
interface GigabitEthernet1/2
switchport
switchport mode access
no ip address
speed 1000
duplex full
spanning-tree portfast
channel-group 15 mode on
09-14-2007 06:23 AM
Then for the server interface team you must use the team type "Generic Trunking (FEC/GEC)/802.3ad-Draft Static".
09-14-2007 08:39 AM
What Sups/linecards are you using?
09-14-2007 08:54 AM
What type of servers are you trunking to?
VMs? A blade center? Are you tagging the native VLAN on the servers?
09-14-2007 09:38 AM
I'm not doing trunking on them at all. I figured since the servers reside on VLAN 1, and that VLAN is all that will traverse this copper, it all just needs to be an access port. It seemed to work with the first one we did with no issues. Are you saying it has to be a trunk port?
09-14-2007 02:35 PM
You do not need to trunk, but there are a few things to note about channels. You can think of a channel as a unidirectional feature: load balancing determines which leg carries the traffic when one side needs to transmit, and each side makes that choice independently. There are several hashing mechanisms in use, so deciding why you want to channel is key to getting what you want.
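As a simplified model of the per-flow link selection described above (not the exact Catalyst hardware hash, which is internal to the platform), the idea can be sketched as:

```python
def choose_link(src: int, dst: int, n_links: int) -> int:
    """Pick the member link for a frame by XOR-ing the low-order bits
    of two address values (a simplified model of a src-dst hash).
    n_links must be a power of two, e.g. 2, 4, or 8 member links."""
    mask = n_links - 1
    return (src ^ dst) & mask

# Every frame between the same address pair always hashes to the same
# member link, which is why a single host pair cannot exceed one
# link's bandwidth no matter how many members the channel has.
link = choose_link(0x0A1B2C, 0x3D4E5F, 2)
```

This also shows why the choice of hash inputs (MAC vs. IP vs. TCP/UDP port) matters: if all traffic shares the same source and destination values for the chosen inputs, it all lands on one leg.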
Most servers that support channels do not support any channel negotiation protocol. Cisco by default uses its own PAgP, and hosts generally don't speak that.
For example, if you are using VMware ESX v3.0.x and wish to trunk, say, VLANs 1 and 20, this works. You can have several ports connected to a vSwitch on a server like this. Traffic from the VMware guests will be spread across the links based on which virtual port each guest is connected to. This works without a "Port-Channel" definition on the switch.
interface GigabitEthernet7/1
description ESX Trunk
switchport
switchport trunk allowed vlan 1,20
switchport mode trunk
switchport nonegotiate
no ip address
logging event link-status
logging event spanning-tree status
logging event trunk-status
flowcontrol receive desired
spanning-tree portfast trunk
end
09-14-2007 06:35 PM
We are using the HP teaming software on the microsoft server, and the config used above. Are you suggesting I use a trunk port to resolve this?
09-14-2007 08:31 PM
Sorry, this is not as simple an answer as it would seem it could be; there are quite a few variables here. The version of the HP teaming software matters, as well as the feature set you are configuring (there are at least 7 different types of HP NIC teaming available, and they are not all compatible with the same Cisco EtherChannel configurations). I believe the older versions of the HP software actually supported Cisco's PAgP protocol, while the current ones do not (much as HP ProCurve switches used to but now may not, depending on whether you have upgraded the software release on certain models).
There are a few other things that might help:
You need to force the channel on, which you did.
You might want to take a look at the HP configuration on the server side:
http://h71028.www7.hp.com/ERC/downloads/4AA0-1139ENW.pdf?jumpid=reg_R1002_USEN
On page 18 there is a flow chart to help decide which features you need to enable in the HP teaming software. They certainly give you lots of options, but you need to decide what scenario you are trying to support.
Also, there can be issues with the default load balancing algorithm set up in the Catalyst 6500. The most effective Etherchannel balancing hash is arguably the one where balancing is based on all four parameters. (src-ip/dst-ip/src-port/dst-port) This works extremely well between two Cat6500 switches, but most server Etherchannel stacks cannot do that without significant overhead, if they can do it at all. Most will function best with a MAC-based hashing algorithm instead. These are global Catalyst switch commands and will affect all EtherChannels in the switch.
Here is the Cisco Etherchannel Guide for 12.2(SX):
Here is how you set the load-balancing for a Cat6500 to MAC, IP or Src/Dst-Port:
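As a sketch, the global command looks roughly like this (the exact keywords available vary by IOS release, so verify on your box):

```
! Global command on the Cat6500; affects every EtherChannel on the switch.
! Typical method keywords include: src-mac, dst-mac, src-dst-mac,
! src-ip, dst-ip, src-dst-ip, src-port, dst-port, src-dst-port
port-channel load-balance src-dst-mac

! Verify the active algorithm:
show etherchannel load-balance
```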
So as you can see, what would be helpful is to decide what type of Etherchannel you want the links to provide, decide how you want that supported and then configure the parameters to match.
Think about questions like:
Why do you want Etherchannel?
Redundancy in case of? Cable failure, switch port failure, NIC failure?
More bandwidth? (some teaming will do this, while some teaming will not)
More bandwidth between a single pair of hosts? (This can be difficult for HP Teaming to do without the extra License Pack.)
The Cat6500 line cards come into play here. The WS-X6148*-GE and WS-X6548*-GE families of line cards will not produce more than 1 Gbps of aggregate bandwidth when the member links come from the same group of 8 ports, even if the channel has several physical links.
http://www.cisco.com/warp/public/473/4.pdf
Many times, if more bandwidth is what is required, jumbo frames are a better solution, or at least a useful adjunct to the others, due to the increased throughput and lower per-packet overhead for the servers.
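As a rough sketch of enabling jumbo frames on a Cat6500 running native IOS (check your release notes first, since jumbo support varies by linecard, and the hosts must be configured for the larger MTU as well):

```
! Hypothetical example; port numbers are placeholders.
system jumbomtu 9216            ! global jumbo MTU on the chassis
interface GigabitEthernet1/1
 mtu 9216                       ! per-port MTU on a supporting linecard
```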
I know this is not the simplest answer, but it seems better for you to have the information you need to make the decision that works best in your environment.
09-17-2007 04:14 AM
Wow, thanks for the reply. I'll certainly look into this. Thanks again.
09-18-2007 09:52 AM
corey@semgrouplp.com, I think your post was very helpful and I appreciate it. Here is what I found. I read through the document you provided, which was helpful. I did turn the switch side into a trunking port-channel instead of an access port-channel. That didn't help, but in the HP software I found a setting with options like "automatically detect", "let switch determine", and so on (obviously not the exact words). I set it to, essentially, let the switch determine how this is set up, and it worked afterwards. Not sure why, but what was very odd was that before the change it COULD ping a very few IP addresses and NOT ping most. After changing that setting, ALL IPs PINGED. Thanks all, for your insight and help.