CAT4900M and NetApp - Performance issue

UHansen1976
Level 1

Hi,

I'm struggling with a performance issue between our two NetApp FAS3170 devices.

The setup is quite simple: each NetApp is connected via two TenGig interfaces to a CAT4900M. The two 4900Ms are also connected to each other via two TenGig interfaces. Each pair of connections is bundled into a Layer 2 etherchannel, configured as a dot1q trunk. The channel mode is set to 'on' on both the 4900s and the NetApps; according to NetApp documentation, this configuration is supported. VLANs 219 and 220 are allowed across each etherchannel. Two partitions are configured on the NetApps, one being active in our primary datacenter and the other in our secondary datacenter. Vlan219 and Vlan220 are configured for each of the two partitions, using HSRP for gateway redundancy.

Neither the interfaces nor the etherchannels show any signs of misconfiguration. All links are up and the etherchannels are working as expected, well, almost. Nothing indicates packet loss, CRC errors, input/output queue drops or anything else that would impact performance. Jumbo frames are not configured, although this has been discussed.
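For reference, the kind of per-port and per-bundle health checks described here can be done with the standard show commands (a sketch only; exact option names and output vary with IOS version):

```
show interfaces counters errors       ! CRC, input/output errors per port
show interfaces port-channel 2        ! queue drops, load, state of the bundle
show etherchannel 2 summary           ! member ports should show (P) - bundled
show udld neighbors                   ! UDLD state on the aggressive-mode ports
```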

The problem is that we're unable to achieve satisfactory performance when, for instance, performing a volume copy between the two NetApp partitions. Even though we have a theoretical bandwidth of 20 Gbps end-to-end, we never climb above 75-80 MBytes/s of actual transfer rate between the two NetApps. Performance-wise, it almost looks as if we're "scaled" down to a 1 Gig link. No QoS or other kind of rate limiting has been implemented on the 4900s, so from a network point of view, the NetApps can go full throttle. The NetApp software has been updated, and the configurations for both the NetApps and the 4900s have been reviewed by NetApp engineers and given a "clean bill of health".
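As a sanity check (my own arithmetic, not from the thread), the observed rate really is well below even a single member link, let alone the 2x10G bundle:

```python
# Back-of-the-envelope check: compare the observed 75-80 MBytes/s
# transfer rate against the link speeds involved.
# Numbers are taken from the observations above; illustrative only.

def mbytes_per_s_to_gbps(mb_per_s: float) -> float:
    """Convert MBytes/s to Gbit/s (1 MByte = 8 Mbit)."""
    return mb_per_s * 8 / 1000

observed = mbytes_per_s_to_gbps(80)  # upper end of the observed rate
print(f"Observed: ~{observed:.2f} Gbps")   # ~0.64 Gbps
print("Single 10G member link: 10 Gbps; 2x10G bundle: 20 Gbps")
```

So the transfer runs at roughly 0.6 Gbps, which is indeed in the ballpark of a saturated 1 Gig link, as noted above.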

The configuration for the 4900->NetApp etherchannel/interfaces is as follows:

interface TenGigabitEthernet1/5
 description *** Trunk NetAPP DC1 ***
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 219,220
 switchport mode trunk
 udld port aggressive
 channel-group 2 mode on
 spanning-tree bpdufilter enable
!
interface TenGigabitEthernet1/6
 description *** Trunk NetAPP DC1 ***
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 219,220
 switchport mode trunk
 udld port aggressive
 channel-group 2 mode on
 spanning-tree bpdufilter enable
!
interface Port-channel2
 description *** Trunk Etherchannel DC1 ***
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 219,220
 switchport mode trunk
 spanning-tree bpdufilter enable
 spanning-tree link-type point-to-point

The configuration for the 4900->4900 interfaces/etherchannel is as follows:

interface TenGigabitEthernet1/1
 description *** Site-to-Site trunk ***
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 10,219,220
 switchport mode trunk
 udld port aggressive
 channel-group 1 mode on
!
interface TenGigabitEthernet1/2
 description *** Site-to-Site trunk ***
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 10,219,220
 switchport mode trunk
 udld port aggressive
 channel-group 1 mode on
!
interface Port-channel1
 description *** Site-to-Site trunk ***
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 10,219,220
 switchport mode trunk
 spanning-tree link-type point-to-point

Vlan10 is used for management purposes.
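One thing that may be worth verifying (my suggestion, not something confirmed in the thread): an etherchannel distributes traffic per flow using a hash, so a single copy stream between two hosts only ever uses one member link, never the aggregate. The hash method can be checked, and changed globally if needed; a sketch, with keywords that vary by platform and IOS version:

```
show etherchannel load-balance
! Default on many Catalyst platforms is src-mac or src-dst-ip, which pins
! all traffic between one pair of hosts to a single member link.
! Where supported, hashing on L4 ports spreads flows between the same
! pair of hosts across members:
configure terminal
 port-channel load-balance src-dst-port
```

Even so, a single TCP stream still maps to one member, so this would only help if the copy runs over multiple parallel connections.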

Does anyone have similar experiences, or suggestions as to why we're having these performance issues?

Thanks

/Ulrich

Message was edited by: UHansen1976

2 Replies

kanglee
Level 1

Ulrich,

I am not familiar with the Cat4900M, but the first question is: what is the baseline performance? When you connect the two NetApps back to back, what speed do you get?

The second question is: do you have any speed mismatch in between, or has any SPAN session been configured? Also, what is the performance difference with one switch versus two switches in the path? The bottom line is that a systematic isolation approach can be applied to this type of troubleshooting. Finally, when you ran the test, what was the network load at the time?

I will post again if I find any known issues on this switch platform.

Thanks

Hi,

Thanks for your reply.

I take it that you mean the baseline performance between the two NetApps. Well, that's really out of my hands, as another department is responsible for the NetApps. I'm not aware of any baseline performance figures, nor have I seen any benchmark tests or anything else that could give me a hint.

Just as you suggest, I've gone through the switch setup systematically, basically starting with the physical layer and working my way up. So far, I've found nothing that would indicate a physical problem. The switchport/etherchannel setup has been verified by my peers, and also verified by NetApp against the configuration on the NetApps as well as the various best-practice documentation available. Furthermore, I haven't seen any signs of packet drops, CRC errors, massive retransmissions or anything like that, neither on the switches nor on the NetApps.

Recently we had a status meeting with our NetApp partner, and it looks to me like they're pursuing the logical setup on the NetApps, as there are apparently a number of settings etc. that need adjustment. Also, we're waiting for NetApp tech support to comment on the traces, config dump etc. we've sent them.

/Ulrich
