Zoning in direct attached storage environment

Mordechay Amir
Cisco Employee

Hi everybody,

I have a small humble question about zoning.

When using a JBOD, each physical disk has its own WWPN and FCID. The zoning configuration pairs a server WWPN with a disk WWPN, so basically each server gets its own disk.

Now, for more advanced storage systems (such as EMC), instead of mapping physical disks to servers we map LUNs per server. I thought that for each LUN a WWPN and FCID were created, and you then zoned the server to that LUN's WWPN. Now I see that it does not work this way.

I have a UCS system with direct-attached storage (EMC VNX5100). The active zoneset shows that all server WWPNs are zoned to the same target WWPN (which is the EMC). There is a zone entry for each server.

So basically the result is that all servers can access the same target, and the EMC itself controls the isolation of the LUNs from other servers.

 

If this is correct, then my question is: why do I need the zoning configuration in this scenario? Why not set the default zone to permit and let the EMC control LUN access?

BTW, this scenario of one storage system attached to the SAN is the same if we use a Nexus 5k, so why bother setting up zoning if the EMC can control access to the LUNs?

 

2 Replies

Walter Dey
VIP Alumni

If this is correct, then my question is: why do I need the zoning configuration in this scenario? Why not set the default zone to permit and let the EMC control LUN access?

You could of course create multiple storage target policies (for different FC interfaces on the controller, or even other storage arrays).

Zoning is just another level of security; setting the default zone to permit is not recommended in production environments. The standard, classical procedure is: zoning AND LUN masking/mapping.

In the case of the N5k, you have to do the zoning manually (CLI) or with DCNM-SAN; in the case of local zoning on the Fabric Interconnect, UCS Manager does it automatically for you.
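To illustrate the manual N5k case, a minimal zoning sketch in NX-OS CLI might look like the following. The VSAN number, zone/zoneset names, and WWPNs are made-up placeholders; substitute your own values.

```
! Minimal zoning sketch on a Nexus 5k (NX-OS). All names and pWWNs are examples.
configure terminal
 zone name Z_SERVER1_VNX vsan 100
  member pwwn 20:00:00:25:b5:01:00:0a   ! server vHBA (initiator)
  member pwwn 50:06:01:60:3e:a0:12:34   ! VNX SP front-end port (target)
 zoneset name ZS_FABRIC_A vsan 100
  member Z_SERVER1_VNX
 zoneset activate name ZS_FABRIC_A vsan 100
```

On UCS with FC switching mode and direct-attached storage, UCS Manager generates the equivalent zones for you from the storage connection policy.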

Hope that helps!

 

guijarro_felipe
Level 1

Hi there,

 

Zoning is used to isolate different environments. If there were no zones other than the default one, every server would "see" all the others, and that could cause problems. For example, if one HBA started failing and resetting its link, a storm of RSCNs would be spread throughout the fabric to notify all devices of that failure. Now multiply this behavior by the number of link resets a host can generate in a minute, and we could have a serious issue in the SAN.

Most of the storage vendors (if not all) require single-initiator zoning, because it is the best way to prevent a misbehaving device from affecting the rest of the elements in the SAN.
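The single-initiator pattern described above means one zone per host HBA, each containing only that initiator plus the target port. A sketch of what that layout looks like (all pWWNs are hypothetical placeholders):

```
! Single-initiator zoning: each zone holds exactly one host HBA plus the target.
zone name Z_HOST1_VNX vsan 100
  member pwwn 20:00:00:25:b5:01:00:01   ! host1 HBA - the only initiator in this zone
  member pwwn 50:06:01:60:3e:a0:00:aa   ! VNX target port
zone name Z_HOST2_VNX vsan 100
  member pwwn 20:00:00:25:b5:01:00:02   ! host2 HBA
  member pwwn 50:06:01:60:3e:a0:00:aa   ! same target port
```

With this layout, a misbehaving HBA in Z_HOST1_VNX generates RSCNs only toward the members of its own zone, so host2 is not disturbed.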

 

rgds