Link aggregation on Cisco 3550 and 3560

Flamewires
Level 1

Hey, I'm trying to convert the FreeBSD servers at work from regular gigabit links to dual-gig lagg links. Our production servers are on a 3560; I have a small test environment on a 3550. I have achieved failover, but am having trouble achieving the speed increase. All servers are running gigabit Intel (em) cards. The config on the servers is:

BSDServer:

#!/bin/sh

#bring up both interfaces

ifconfig em0 up media 1000baseTX mediaopt full-duplex

ifconfig em1 up media 1000baseTX mediaopt full-duplex

#create the lagg interface

ifconfig lagg0 create

#set lagg0's protocol to lacp, add both cards to the interface,

#and assign it em1's ip/netmask

ifconfig lagg0 laggproto lacp laggport em0 laggport em1 ***.***.***.*** netmask 255.255.255.0
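
For reference, once this runs the lagg state can be checked with the command below; when LACP has negotiated on both ports, each laggport line should show ACTIVE,COLLECTING,DISTRIBUTING flags (the exact output varies a little between FreeBSD versions):

ifconfig lagg0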

The switches are configured as follows:

#clear out old junk

no int Po1

default int range GigabitEthernet 0/15 - 16

# config ports

interface range GigabitEthernet 0/15 - 16

description lagg-test

switchport

duplex full

speed 1000

switchport access vlan 192

spanning-tree portfast

channel-group 1 mode active

channel-protocol lacp

**** switchport trunk encapsulation dot1q ****

no shutdown

exit

interface Port-channel 1

description lagginterface

switchport access vlan 192

end


Obviously, change the 1000s to 100s and GigabitEthernet to FastEthernet for the 3550's config, as that switch only has 100 Mbit ports.
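
For reference, whether the channel actually bundles can be checked on the switch with the commands below; in the summary output, Po1 should show (SU) and the member ports (P) once the bundle is up (exact output differs between the 3550 and 3560 IOS versions):

show etherchannel 1 summary

show lacp neighbor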

With this config on the 3550, I get failover and 92 Mbit/sec on both links simultaneously, connecting to 2 hosts (tested with iperf). Success. However, this only works with the "switchport trunk encapsulation dot1q" line.

First, I do not understand why I need this; I thought it was only for connecting switches. Is there some other setting that this turns on which is actually responsible for the speed increase?

Second, this config does not work on the 3560. I get failover, but not the speed increase: speeds drop from about 1 Gbit/sec to 500 Mbit/sec when I make 2 simultaneous connections to the server, with or without the encapsulation line. I should mention that both switches are using source-MAC load balancing.
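
For reference, the active load-balancing method on either switch can be confirmed with the command below (the available methods differ between the 3550 and 3560):

show etherchannel load-balance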

Any ideas/corrections/questions would be helpful, as the gig switches are what I actually need the lagg links on. Ask if you need more information.

2 Replies

Nagaraja Thanthry
Cisco Employee

What is the operating mode on the FreeBSD side? If it is failover mode, the server only uses the master port for data transmission. As far as load balancing on the Cisco side is concerned, since all traffic is going to one IP address (and MAC), the switch will use only one port. You could try using src-dst-ip or src-dst-mac load balancing and see if that helps (port-channel load-balance src-dst-ip/src-dst-mac). Hope this helps.
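
For example, a minimal sketch of that change, entered in global configuration mode (the src-dst-ip keyword is only available on platforms that support IP-based load balancing, such as the 3560):

configure terminal

port-channel load-balance src-dst-ip

end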

Regards,

NT

It's LACP on the BSD side.
The 3550 only supports src-mac and dst-mac forwarding. I could try changing this on the 3560; I will do that tomorrow.

On the 3550, enabling src-mac causes the load balancing to occur based on a hash of the source IP and source MAC, which should be different for each client.

In my test I am using iperf. I have the server (the lagg box) set up as the iperf server (iperf -s), and the client computers as clients (iperf -c), so the source MAC (and IP) are different for both connections.
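
For reference, the test looks roughly like this (the lagg box's address is masked here, as in the config above):

# on the lagg box

iperf -s

# on each client machine

iperf -c <lagg0-address>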
