Configure Cisco Switch for Dell NIC Teaming

Answered Question
Mar 4th, 2010

We created a NIC team on a Dell PowerEdge R710 server. The two NICs connect to a Cisco 3560G switch without any issues. If we unplug one cable, the server still works and connects to the network. Do we need to configure EtherChannel on those two ports?


Here is the current configuration.


interface GigabitEthernet0/16
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 200
 switchport mode trunk
 spanning-tree portfast

interface GigabitEthernet0/17
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 200
 switchport mode trunk
 spanning-tree portfast


jfraasch Thu, 03/04/2010 - 13:07

Is this a VM server? I ask because you have dot1q encapsulation configured, which would typically be used in a VM environment.


The only reason you might change the config on the switch is if you are going to do link aggregation with your load balancing. If you are simply doing some server-side load balancing then this configuration will work. There should be no need to trunk, however, unless you are going to be running VMs.


Hope that helps.


James

Reza Sharifi Thu, 03/04/2010 - 13:40

Hi,


I think it is a good idea to put your ports in an EtherChannel. EtherChannel will provide load sharing of traffic across your two Gig links as well as redundancy if one of the Gig links fails.


HTH

Reza

chicagotech Thu, 03/04/2010 - 13:54

Forgot to mention: this is a Windows 2008 server.


Reza, you raise a good question. After creating the NIC teaming, the connection status shows only 1 Gbps. We called Dell tech support and asked if it should be 2 Gbps. He said no. He said if someone told you NIC teaming would double the speed, that is misleading and marketing only.


I would like to try EtherChannel, but I am not a Cisco engineer. Is our port configuration fine? What commands do I add to the port configuration if I want to try EtherChannel?


Thank you.

chicagotech Fri, 03/05/2010 - 09:52

After I configured EtherChannel, the server can't ping the network. I may not have configured it correctly. Here are the port configuration and other information you may need.


switchport trunk encapsulation dot1q

switchport trunk native vlan 200

switchport mode trunk

channel-group 1 mode on

spanning-tree portfast


Switch#show etherchannel 1 summary

Flags:  D - down        P - in port-channel

        I - stand-alone s - suspended

        H - Hot-standby (LACP only)

        R - Layer3      S - Layer2

        U - in use      f - failed to allocate aggregator

        u - unsuitable for bundling

        w - waiting to be aggregated

        d - default port



Number of channel-groups in use: 1

Number of aggregators:          1


Group  Port-channel  Protocol    Ports

------+-------------+-----------+---------------------------------------------

1      Po1(SU)          -        Gi0/11(P)  Gi0/12(P)


Switch#show etherchannel load-balance

EtherChannel Load-Balancing Operational State

(src-mac):

Non-IP: Source MAC address

  IPv4: Source MAC address

  IPv6: Source IP address


Switch#sh int port-channel1

Port-channel1 is up, line protocol is up (connected)

  Hardware is EtherChannel, address is 001c.f6e0.bb14 (bia 001c.f6e0.bb14)

  MTU 1500 bytes, BW 4000000 Kbit, DLY 10 usec,

    reliability 255/255, txload 1/255, rxload 1/255

  Encapsulation ARPA, loopback not set

  Full-duplex, 1000Mb/s, link type is auto, media type is unknown

  input flow-control is off, output flow-control is unsupported

  Members in this channel: Gi0/11 Gi0/12 Gi0/13 Gi0/20

  ARP type: ARPA, ARP Timeout 04:00:00

  Last input never, output 00:00:00, output hang never

  Last clearing of "show interface" counters never

  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0

  Queueing strategy: fifo

  Output queue: 0/40 (size/max)

  5 minute input rate 15000 bits/sec, 16 packets/sec

  5 minute output rate 5033000 bits/sec, 12343 packets/sec

    689522 packets input, 66899012 bytes, 0 no buffer

    Received 44176 broadcasts (0 multicast)

    0 runts, 0 giants, 0 throttles

    0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored

    0 watchdog, 31445 multicast, 0 pause input

    0 input packets with dribble condition detected

    94825011 packets output, 977292733 bytes, 0 underruns

    0 output errors, 0 collisions, 3 interface resets

    0 babbles, 0 late collision, 0 deferred

    0 lost carrier, 0 no carrier, 0 PAUSE output

    0 output buffer failures, 0 output buffers swapped out

jfraasch Fri, 03/05/2010 - 10:14

Do your NICs support Cisco EtherChannel? I have seen them with link aggregation (LACP) but not with Cisco EtherChannel, as this is a Cisco-specific protocol. Double-check your NIC settings to see if they support it.


If they do not, then just change the config on the switch to LACP by changing mode "on" to mode "active" on both the port-channel and interface config.
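On the switch that change might look something like this (a sketch using the Gi0/11 and Gi0/12 ports from the output earlier in the thread; verify against your own port assignments):

Switch(config)# interface range GigabitEthernet0/11 - 12
Switch(config-if-range)# no channel-group 1
Switch(config-if-range)# channel-group 1 mode active

Removing and re-adding the channel-group is the safest way to switch modes, since IOS will not always let you change the mode in place.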


James

chicagotech Fri, 03/05/2010 - 12:48

OK, I got the Dell engineer's response: "I just checked with my NOS Analyst and we don't support the Cisco Etherchannel. We do support LACP though."


After I configured LACP as shown below,

switchport trunk encapsulation dot1q

switchport trunk native vlan 200

switchport mode trunk

channel-protocol lacp

channel-group 1 mode active

spanning-tree portfast


those two ports are suspended, with the orange light on.


Here are more information.


interface Port-channel1

switchport trunk encapsulation dot1q

switchport trunk native vlan 200

switchport mode trunk



Switch#show etherchannel 1 summary

Flags:  D - down        P - in port-channel

        I - stand-alone s - suspended

        H - Hot-standby (LACP only)

        R - Layer3      S - Layer2

        U - in use      f - failed to allocate aggregator

        u - unsuitable for bundling

        w - waiting to be aggregated

        d - default port



Number of channel-groups in use: 1

Number of aggregators:           1


Group  Port-channel  Protocol    Ports

------+-------------+-----------+----------------------------

1      Po1(SD)         LACP      Gi0/11(s) Gi0/12(s)


Switch#show spanning-tree interface port-channel 1

no spanning tree info available for Port-channel1

Reza Sharifi Fri, 03/05/2010 - 13:12


You need to bring them all up at the same time.


Do this on the switch:


1-Go to ports Gi0/11 and Gi0/12 and issue the command "shut" under each interface

2-Go to interface Port-channel1 and issue the command "shut"

3-Now issue the command "no shut" on all three interfaces

4-Now you should see both the physical interfaces and Port-channel1 come up to up/up
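In IOS terms, the bounce sequence above would be roughly (a sketch using the port numbers from this thread):

Switch(config)# interface range GigabitEthernet0/11 - 12
Switch(config-if-range)# shutdown
Switch(config-if-range)# interface Port-channel1
Switch(config-if)# shutdown
Switch(config-if)# no shutdown
Switch(config-if)# interface range GigabitEthernet0/11 - 12
Switch(config-if-range)# no shutdown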


HTH

Reza

chicagotech Fri, 03/05/2010 - 13:29

Thank you for the tip. However, I did try shut and no shut before. I just tried them again, but that doesn't fix it.

chicagotech Fri, 03/05/2010 - 13:44


Current configuration : 4748 bytes
!
! Last configuration change at 15:27:31 CST Fri Mar 5 2010 by blin
! NVRAM config last updated at 16:58:21 CST Thu Feb 25 2010 by blin
!
version 12.2
no service pad
service timestamps debug datetime msec
service timestamps log datetime
service password-encryption
service sequence-numbers
!
hostname Switch
!
no logging console guaranteed
no logging console


aaa new-model
aaa authentication login default local
aaa authentication login MYTAC group radius local
aaa authentication dot1x default group radius
aaa authorization network default group radius
aaa accounting exec default start-stop group radius
!
aaa session-id common
clock timezone CST -6
clock summer-time CDT recurring
ip subnet-zero
!
!
!
!
dot1x system-auth-control
no file verify auto
spanning-tree mode pvst
spanning-tree extend system-id
!
vlan internal allocation policy ascending
!
interface Port-channel1
switchport trunk encapsulation dot1q
switchport trunk native vlan 200
switchport mode trunk
!
interface GigabitEthernet0/1
switchport access vlan 4
!
interface GigabitEthernet0/2
switchport trunk encapsulation dot1q
switchport trunk native vlan 600
switchport mode trunk
spanning-tree portfast
!
interface GigabitEthernet0/3
switchport trunk encapsulation dot1q
switchport mode trunk
spanning-tree portfast
!
interface GigabitEthernet0/4
switchport trunk encapsulation dot1q
switchport mode trunk
spanning-tree portfast
!
interface GigabitEthernet0/5
switchport mode access
spanning-tree portfast
!
interface GigabitEthernet0/6
description win2008
switchport access vlan 254
switchport mode access
spanning-tree portfast
spanning-tree bpduguard enable
!
interface GigabitEthernet0/7
!
interface GigabitEthernet0/8
!
interface GigabitEthernet0/9
!
interface GigabitEthernet0/10
!
interface GigabitEthernet0/11
switchport trunk encapsulation dot1q
switchport trunk native vlan 200
switchport mode trunk
channel-group 1 mode active
spanning-tree portfast
!
interface GigabitEthernet0/12
switchport trunk encapsulation dot1q
switchport trunk native vlan 200
switchport mode trunk
spanning-tree portfast
!
interface GigabitEthernet0/13
switchport trunk encapsulation dot1q
switchport trunk native vlan 200
switchport mode trunk
spanning-tree portfast
!
interface GigabitEthernet0/14
switchport trunk encapsulation dot1q
switchport trunk native vlan 200
switchport mode trunk
spanning-tree portfast
!
interface GigabitEthernet0/15
switchport trunk encapsulation dot1q
switchport trunk native vlan 200
switchport mode trunk
spanning-tree portfast
!
interface GigabitEthernet0/16
switchport trunk encapsulation dot1q
switchport trunk native vlan 200
switchport mode trunk
spanning-tree portfast
!
interface GigabitEthernet0/17
switchport trunk encapsulation dot1q
switchport trunk native vlan 200
switchport mode trunk
spanning-tree portfast
!
interface GigabitEthernet0/18
switchport trunk encapsulation dot1q
switchport trunk native vlan 200
switchport mode trunk
spanning-tree portfast
!
interface GigabitEthernet0/19
switchport trunk encapsulation dot1q
switchport trunk native vlan 200
switchport mode trunk
spanning-tree portfast
!
interface GigabitEthernet0/20
switchport trunk encapsulation dot1q
switchport trunk native vlan 200
switchport mode trunk
channel-protocol lacp
channel-group 1 mode active
spanning-tree portfast
!
interface GigabitEthernet0/21
switchport trunk encapsulation dot1q
switchport trunk native vlan 200
switchport mode access
spanning-tree portfast
!
interface GigabitEthernet0/22
switchport trunk encapsulation dot1q
switchport trunk native vlan 200
switchport mode trunk
spanning-tree portfast
!
interface GigabitEthernet0/23
switchport mode access
dot1x pae authenticator
dot1x port-control auto
spanning-tree portfast
!
interface GigabitEthernet0/24
switchport trunk encapsulation dot1q
switchport mode trunk
!
interface GigabitEthernet0/25
!
interface GigabitEthernet0/26
!
interface GigabitEthernet0/27
!
interface GigabitEthernet0/28
!
interface Vlan1
ip address 10.0.20.150 255.255.0.0
!
ip default-gateway 10.0.0.2
ip classless
ip http server
ip radius source-interface Vlan1
!
snmp-server community public RO
snmp-server community private RW
radius-server host 10.0.20.55 auth-port 1645 acct-port 1646 key 7 0205065C
radius-server source-ports 1645-1646
!
control-plane
!
!
line con 0
line vty 0 4
password 7 060506324F41
login authentication MYTAC
line vty 5 15
!
ntp clock-period 36028558
ntp server 128.105.39.11
end


Switch#

Reza Sharifi Fri, 03/05/2010 - 13:50

Under interface GigabitEthernet0/12

you are missing "channel-group 1 mode active"

chicagotech Fri, 03/05/2010 - 14:00

Sorry, I changed the port from Gi0/12 to Gi0/20. Both Gi0/11 and Gi0/20 should be the same.

Giuseppe Larosa Sat, 03/06/2010 - 10:03

Hello,

>> Those two ports are suspended with orange light on.


This should happen because the server NICs are not sending LACP frames; after waiting some time, the switch puts the ports in the suspended state.


to verify this you can use


sh log | inc %EC-5


you should see message like the following:


Mar  5 16:45:14: %EC-5-UNBUNDLE: Interface Gi6/7 left the port-channel Po3
Mar  5 16:45:14: %EC-5-UNBUNDLE: Interface Gi6/8 left the port-channel Po3
Mar  5 16:45:19: %EC-5-L3DONTBNDL2: Gi6/7 suspended: LACP currently not enabled on the remote port.
Mar  5 16:45:20: %EC-5-L3DONTBNDL2: Gi6/8 suspended: LACP currently not enabled on the remote port.


If you find similar messages, the key point is how to enable the server to start the LACP daemon on its NICs.


Hope to help

Giuseppe

Reza Sharifi Fri, 03/05/2010 - 12:13

The servers support LACP, so can you change channel-group 1 mode on to channel-group 1 mode active? Also, on the server side, do the same and make sure LACP is turned on. Did you also add switchport mode trunk and switchport trunk native vlan 200 to the physical ports as well?
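For the bundle to come up, the port-channel interface and every physical member need matching switchport settings; roughly like this sketch (ports as used earlier in the thread):

interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 200
 switchport mode trunk
!
interface range GigabitEthernet0/11 - 12
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 200
 switchport mode trunk
 channel-group 1 mode active

If the member ports and the port-channel disagree on trunk mode or native VLAN, the switch will suspend the members rather than bundle them.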


HTH

Reza

chicagotech Fri, 03/05/2010 - 12:51

I can't find the place to turn on LACP. How do you turn on LACP on a Dell server?

Reza Sharifi Fri, 03/05/2010 - 13:02

I am not a server guy, but I have seen server guys do it. When you create NIC teaming using the Broadcom software on a Microsoft OS, there is an option for LACP or some other Microsoft link aggregation, so you want to pick LACP. If you are using an ESX server, I think they have their own software for NIC teaming.


Reza 

chicagotech Mon, 03/08/2010 - 10:34

I re-created the team on the Dell by selecting LACP. Now I can access the network. But so what? I don't see any difference. Let me go back to my original question.


My original teaming on the Dell was set up to use Smart Load Balance and Failover, without adding channel-group 1 mode active. It works, with a 1 Gbps speed showing. Now I have configured teaming with LACP and channel-group 1 mode active. The speed still shows 1 Gbps.


1. Can I get double the speed if I create NIC teaming? Or does it just give me failover?

2. If it does give me more speed, how can we tell, when the Dell connection shows 1 Gbps? Is there a tool to test it?

3. What are the differences between load balance/failover and LACP? Which is better?

4. If the LACP setting doesn't give me more speed, should we go back to the original configuration without LACP and channel-group 1 mode active?

Correct Answer
jfraasch Mon, 03/08/2010 - 10:50

Each individual NIC can only speak at 1Gbps so your connection "speed" will always show 1Gbps on the Dell.  That is the physical connection speed. However, by using LACP you are actually rolling those two NIC cards together into a single pipe and therefore getting a full 2Gbps of throughput.


The cool thing about LACP (and EtherChannel, for that matter) is that you get link aggregation while at the same time employing redundancy. It kills the bandwidth bottleneck and the redundancy problem with one solution.


But yeah, don't expect to see your NIC speed show 2Gbps. It's a logical 2Gbps connection, not a physical one.


You would have to test it to prove it, although I am sure that Dell already has a white paper out there somewhere doing the testing for you.


On a quick search I came across this document: http://www.cisco.com/en/US/prod/collateral/switches/ps6746/ps8742/ps8764/white_paper_c07-443792.pdf where it says:


LACP based teaming extends the functionality by allowing the team to receive load-balanced traffic from the network. This requires that the switch can load balance the traffic across the ports connected to the server NIC team. LACP based load balancing is done on the L2 address. The team of NICs looks like a larger single NIC to the switch, much like an EtherChannel looks between switches. Redundancy is built into the protocol. The Cisco Catalyst Blade Switch 3130 supports the IEEE 802.3ad standard and Gigabit port channels. Servers can now operate in Active/Active configurations. This means that each server team can provide 2 Gigabit of Ethernet Connectivity to the Switching fabric. Failover mechanisms are automatically built into the LACP protocol. The pair of CBS3130s must be in the same ring for the server to support LACP connections. In other words, the server must see the same switch on both interfaces. Otherwise, the user most likely will use the SLB mode.
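One related point: the show etherchannel load-balance output earlier in the thread showed src-mac hashing, which means all traffic from a single source MAC always rides one member link, so an individual transfer will never exceed 1Gbps even across the bundle. On the 3560 you can change the hash globally if you want better distribution across flows (a suggestion, not required for the channel to work):

Switch(config)# port-channel load-balance src-dst-ip

Aggregate throughput across many flows is what scales toward 2Gbps; any single flow is still capped at the speed of one member link.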


Hope that helps.

James

chicagotech Mon, 03/08/2010 - 12:01

James, thank you for the detailed information. After reading your post, I have decided to keep the LACP configuration. What are the disadvantages of using LACP?


This server is our GIS server running SQL. It comes with 4 NICs. Should I do one team with all 4 NICs, with two connecting to one switch and the other two connecting to another identical Cisco switch with the same port configuration?


We have a lot of HP servers. When we create a NIC team with two NICs on an HP server, it always shows 2 Gbps. But the Dell engineer said that is misleading or marketing; you won't get the 2 Gbps.

Correct Answer
jfraasch Mon, 03/08/2010 - 12:39

I'd do all four NICs but it really depends on your network design.


If you trunk VLANs across two switches and the link between the switches is greater than 1Gbps, then I would set up the aggregation across all four NICs. If you just have two switches with a 1Gbps link between them, then maybe you do two different aggregations, one aggregation (two NICs) per switch. Of course, if you have two different aggregations then you would need two different IP addresses (one per aggregation) and your application would need to know about both.


Again, the right answer depends on your design. If you can trunk the VLAN across two separate switches at more than 1Gbps, then use all four. If you are limited to a 1Gbps trunk, then I would probably have two different aggregations and adjust the application accordingly.


James
