Configure 3560X switch for iSCSI connectivity

smeek
Level 1

I haven't worked with infrastructure much lately, but I am planning to replace a Dell switch used for a small iSCSI SAN with a Cisco 3560X. The switch is for storage ONLY; the regular network is handled through different switches. I believe I want to disable spanning tree and configure trunking for the 4 Gb ports connected to the storage array. The switch provides connectivity to the SAN via multiple Gb connections, and also multiple Gb connections to 3 VMware hosts. Can you look over the changes I made to the config below and let me know if I am on the right track?

I also plan to add a second switch later with the 10 Gb modules to interconnect them for redundancy (but I'm not worried about that part for now). I do not plan on using jumbo frames.

Here are my questions:

1) Is this the best way to globally disable spanning tree on the switch?

2) Did I trunk the ports correctly below?

3) Should I apply any specific Smartport roles to the 4 SAN NICs (Gi0/1-4) or to the host NICs (Gi0/13-20)?

4) Anything additional I should do to handle iSCSI data?
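For context, a dedicated iSCSI-facing access port on a 3560X is commonly configured along the lines of the sketch below. This is an illustration only; the VLAN number (100) and the port range are assumptions, not taken from the config in this thread:

```
! Sketch only: assumes a dedicated iSCSI VLAN 100 and host ports Gi0/13-20
vlan 100
 name iSCSI
!
interface range GigabitEthernet0/13 - 20
 description host NICs
 switchport mode access
 switchport access vlan 100
 spanning-tree portfast
 flowcontrol receive desired
 no cdp enable
```

The idea is to keep the ports as plain access ports in an isolated VLAN, with portfast so link flaps don't trigger STP convergence delays on the storage path.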

Here is a copy of the config so far, with my changes marked in RED.

------------------------------------------------------------------------------------------

storage1#wr t
Building configuration...

Current configuration : 1696 bytes
!
version 12.2
no service pad
service timestamps debug datetime msec
service timestamps log datetime msec
no service password-encryption
!
hostname storage1
!
boot-start-marker
boot-end-marker
!
enable secret 5 BLAHBLAHBLAH.
enable password BLAHBLAHBLAH2
!
!
!
no aaa new-model
system mtu routing 1500
!
!
cluster enable storagecluster 0
!
!
!
spanning-tree mode pvst
spanning-tree extend system-id

no spanning-tree vlan 1
!
vlan internal allocation policy ascending
!
interface FastEthernet0
 ip address x.x.x.x 255.255.255.0
!
interface GigabitEthernet0/1
 description Stonefly ports
 switchport mode access
 channel-group 5 mode desirable non-silent
!
interface GigabitEthernet0/2
 description Stonefly ports
 switchport mode access
 channel-group 5 mode desirable non-silent
!
interface GigabitEthernet0/3
 description Stonefly ports
 switchport mode access
 channel-group 5 mode desirable non-silent
!
interface GigabitEthernet0/4
 description Stonefly ports
 switchport mode access
 channel-group 5 mode desirable non-silent
!
interface GigabitEthernet0/5
!
interface GigabitEthernet0/6
!
interface GigabitEthernet0/7
!
interface GigabitEthernet0/8
!
interface GigabitEthernet0/9
!
interface GigabitEthernet0/10
!
interface GigabitEthernet0/11
!
interface GigabitEthernet0/12
!
interface GigabitEthernet0/13
 description host NICs
!
interface GigabitEthernet0/14
 description host NICs
!
interface GigabitEthernet0/15
 description host NICs
!
interface GigabitEthernet0/16
 description host NICs
!
interface GigabitEthernet0/17
 description host NICs
!
interface GigabitEthernet0/18
 description host NICs
!
interface GigabitEthernet0/19
 description host NICs
!
interface GigabitEthernet0/20
 description host NICs
!
interface GigabitEthernet0/21
!
interface GigabitEthernet0/22
!
interface GigabitEthernet0/23
!
interface GigabitEthernet0/24
!
interface GigabitEthernet1/1
!
interface GigabitEthernet1/2
!
interface GigabitEthernet1/3
!
interface GigabitEthernet1/4
!
interface TenGigabitEthernet1/1
!
interface TenGigabitEthernet1/2
!
interface Vlan1
 ip address x.x.x.x 255.255.255.0
!
ip http server
ip http secure-server
!
ip sla enable reaction-alerts
snmp-server community public RO
!
line con 0
line vty 0 4
 password BLAHBLAH3
 login
line vty 5 15
 password BLAHBLAH3
 login
!
end

3 Replies

smeek
Level 1

The hosts are running VMware ESXi 5.x, in case that impacts any feedback.

Were you able to find answers? I have a 3560X as well and am curious about the configuration too. My two switches are only for iSCSI traffic, in a VLAN connected to a Dell SAN with two controllers (3 links total from the active SAN controller are plugged into the iSCSI switches: two of the Type 7 controller ports go to switch 1 and the other goes to switch 2). The other (4th) port on the Type 7 SAN controller is for management and is wired to a third switch.

I have the following config snippet. I'm having some performance issues and am wondering if anything needs adjusting. Is spanning tree required, and is it configured correctly? Is flow control turned on correctly? If I run "show int gi0/3" it says flow control is off, even though the NIC has it turned on. Is portfast required on the switch ports?
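The switch-side state for both questions can be checked directly; these are standard IOS show commands (the interface name below is simply the one from the question):

```
show flowcontrol interface GigabitEthernet0/3
show spanning-tree interface GigabitEthernet0/3 portfast
show interfaces GigabitEthernet0/3 status
```

Note that `show flowcontrol` reports both the administrative and the operational state, so "off" operationally can mean the negotiation with the NIC did not complete even if the port is configured `receive desired`.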

version 12.2
no service pad
service timestamps debug datetime msec
service timestamps log datetime msec
service password-encryption
service sequence-numbers
no service dhcp
!
hostname iscsiswitch
!
boot-start-marker
boot-end-marker
!
no logging console
!
aaa group server radius RADIUS
!
aaa authentication login CONSOLE line
!
aaa session-id common
system mtu routing 9198
!
ip domain-name .....
vtp mode off
!
spanning-tree mode rapid-pvst
spanning-tree loopguard default
spanning-tree portfast default
spanning-tree portfast bpduguard default
spanning-tree portfast bpdufilter default
spanning-tree extend system-id
!
port-channel load-balance src-dst-mac
!
vlan internal allocation policy ascending
vlan dot1q tag native
!
vlan 12
 name iSCSI
!
vlan 665
 name Management
!
vlan 99
 name UNUSED_PORTS
!
ip ssh source-interface FastEthernet0
ip ssh version 2
!
interface Port-channel11
 description Uplink to other iscsi switch
 switchport access vlan 665
 switchport trunk native vlan 665
 switchport mode access
 switchport nonegotiate
 ip flow ingress
 ip flow egress
 flowcontrol receive on
 spanning-tree bpdufilter disable
 spanning-tree bpduguard disable
!
interface FastEthernet0
 description Management Interface
 ip flow ingress
 ip flow egress
 ip address 10.10.x.x 255.255.255
!
interface GigabitEthernet0/1
 description server 1
 switchport access vlan 12
 switchport mode access
 ip flow ingress
 ip flow egress
 flowcontrol receive desired
!
interface GigabitEthernet0/2
 description server 2
 switchport access vlan 12
 switchport mode access
 ip flow ingress
 ip flow egress
 flowcontrol receive desired
!
interface GigabitEthernet0/3
 description server 3
 switchport access vlan 12
 switchport mode access
 ip flow ingress
 ip flow egress
 flowcontrol receive desired
!
...............
interface GigabitEthernet1/1
 description Uplink to ToOtherSwitch (Gi1/1)
 switchport access vlan 665
 switchport trunk native vlan 665
 switchport mode access
 switchport nonegotiate
 ip flow ingress
 ip flow egress
 flowcontrol receive on
 spanning-tree portfast
 channel-group 11 mode on
!
interface GigabitEthernet1/2
 description Uplink to ToOtherSwitch (Gi1/2)
 switchport access vlan 665
 switchport trunk native vlan 665
 switchport mode access
 switchport nonegotiate
 ip flow ingress
 ip flow egress
 flowcontrol receive on
 spanning-tree portfast
 channel-group 11 mode on
!
interface GigabitEthernet1/3
 description Uplink to ToOtherSwitch (Gi1/3)
 switchport access vlan 665
 switchport trunk native vlan 665
 switchport mode access
 switchport nonegotiate
 ip flow ingress
 ip flow egress
 flowcontrol receive on
 spanning-tree portfast
 channel-group 11 mode on
!
interface GigabitEthernet1/4
 description Uplink to ToOtherSwitch (Gi1/4)
 switchport access vlan 665
 switchport trunk native vlan 665
 switchport mode access
 switchport nonegotiate
 ip flow ingress
 ip flow egress
 flowcontrol receive on
 spanning-tree portfast
 channel-group 11 mode on
!
interface TenGigabitEthernet1/1
 shutdown
!
interface TenGigabitEthernet1/2
 shutdown
!
interface Vlan1
 no ip address
 shutdown

......

Thanks for any advice.

rmcgurn
Level 1

For what it's worth (I know this question has been here for a long time), I've found that using port channels with iSCSI is not as good as using MPIO set up correctly. iSCSI isn't too complicated, but certain prerequisites do have to be met in order to get acceptable performance. A good non-blocking switch is crucial: the 3560X ought to be fine; the 3560G, not so much (its ports are oversubscribed).
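On ESXi 5.x, MPIO for iSCSI is normally done with iSCSI port binding plus round-robin pathing rather than a switch port channel. A rough sketch of the CLI steps follows; the adapter name (vmhba33), the vmk names, and the device ID are placeholders to replace with your own values:

```
# Bind two VMkernel ports (each mapped to a single physical uplink)
# to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
# Use round-robin multipathing for the iSCSI device (naa ID is a placeholder)
esxcli storage nmp device set --device=naa.xxxxxxxx --psp=VMW_PSP_RR
```

With port binding, the hypervisor's multipathing layer balances and fails over across paths, so no EtherChannel is needed on the switch side.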

Another thing to keep in mind is your storage IP configuration. Please check with your storage vendor for which method is best. For example, Dell has two product lines, EqualLogic and the MD3000 series. Both use iSCSI, but with EqualLogic it is recommended to have all your SAN NICs use an IP address in the same subnet (the iSCSI VLAN), and you put your initiator NICs on that same subnet as well. You should then keep this subnet completely isolated from the rest of your network. For the MD3000, however, the different NICs on the SAN are each supposed to be in their own individual subnet, such as:

10.1.1.5

10.1.2.5

10.1.3.5

10.1.4.5

 

Many Unix/Linux-based solutions also operate this way, using a different subnet per NIC. Best thing, again: check with your storage vendor for IP configuration best practices.

Port channels work better for NFS than for iSCSI. Also, disable unicast storm control.

Another thing to do is enable flow control. Note that the 3560X/3750X will only configure flow control on receive: they will honor pause frames from the attached device, but will not send out pause frames themselves.

Hard-code the speed and duplex on your switch ports and NIC interfaces.

Ensure that STP is disabled, at least on iSCSI interfaces.
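Taken together, those last few tips might look something like this on a 3560X iSCSI-facing port (a sketch only; the VLAN and port numbers are assumptions):

```
interface GigabitEthernet0/13
 description iSCSI host NIC
 switchport mode access
 switchport access vlan 12
 speed 1000
 duplex full
 flowcontrol receive desired
 spanning-tree portfast
 ! unicast storm control is left at its default (disabled)
```

Portfast keeps the port from sitting in the STP listening/learning states on link-up without disabling spanning tree for the whole switch.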

Beyond that, ensure your cabling is good. If you're using routed storage (meaning a different subnet per NIC), make sure you have as few hops as possible to your targets. Keeping latency as low as possible is also crucial.

Hope this helps some.
