IVR for traversing FC & iSCSI VSANs

jduchnowski
Level 1

An EMC CX700 currently has a VSAN for FC host traffic to 4 of its ports, and another VSAN for iSCSI host traffic to 2 different ports on the same CX700. The switch is an MDS 9509 running 2.0.3 code with the required license to run IVR.

The optimal configuration would be to keep iSCSI and FC in separate VSANs yet access the same storage ports. Since a port is limited to 1 VSAN, I would like to implement IVR to allow the iSCSI hosts to access the same storage ports as the FC hosts. If this is implemented what impact would it have on the production FC VSAN if any, and what are any other concerns I should be aware of if implementing this configuration? A showtech.txt of the current switch config is available if needed.

1 Accepted Solution

Accepted Solutions

tblancha
Cisco Employee

I don't see the value of IVR in your scenario. If you are going to have the iSCSI initiators in a VSAN but transit to another VSAN to access storage, then why not use regular zoning? There is no functional difference to the end devices between IVR and having everything in the same VSAN. You are not saving any traffic or overhead by using IVR here. In fact, you would simply double the domains, introduce FSPF routing, and add to the complexity of the config with no benefit other than getting the initiators in a different VSAN. But really, the iSCSI initiators are not isolated to that VSAN, since you would be allowing them to traverse another VSAN. You can't isolate them via a different VSAN and then provide connectivity to them; at that point they are not isolated anymore.


11 Replies


The idea is to remain in an EMC supported state. Per the EMC iSCSI SAN Practitioners Guide:

"The network should be dedicated solely to the iSCSI configuration. For performance reasons EMC recommends that no traffic apart from iSCSI traffic should be carried over it. If using MDS switches, EMC recommends creating a dedicated VSAN for all iSCSI traffic."

I'm still researching whether using IVR in this scenario would put the config in an unsupported state. Thank you for the insight, and I see your point; however, there must be a reason why EMC recommends creating separate VSANs for iSCSI.

One of the key advantages to VSANs is the separation of fabric services. Using IVR, he would be isolating almost all FC Control traffic to its own VSAN, only allowing what is absolutely necessary for IVR (Name Server info only for configured nodes, RSCNs only for configured nodes, and FSPF only for relevant domains).

This keeps fabric problems separate and would make troubleshooting easier, even if the configuration is a bit more complex and requires FSPF.

Which fabric problems? RSCN propagation would be the same, nameserver behavior is the same, and FLOGI behavior is the same. Zoning is harder because it is done with the IVR zoneset instead of just the regular zoneset, meaning it has to lock the two regular zonesets and then insert the contents of the IVR zoneset into the regular/vsan-level zonesets. FLOGI, FC2, and nameserver each run as a single process. There is not a process for each VSAN, so if one of these processes crashed, it would take out that function for all VSANs; there is no value there. In this situation, you also have to worry about overlapping domains if you don't make them static.
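For comparison, the regular single-VSAN zoning being advocated here is quite short. This is only a sketch: the VSAN number, zone/zoneset names, and pWWNs below are placeholders, not values from this thread.

```
! everything in one vsan (say vsan 1); hypothetical names and pWWNs
zone name zoneXY vsan 1
  member pwwn 21:00:00:e0:8b:11:22:33   ! iscsi initiator X
  member pwwn 50:06:01:60:10:60:14:f5   ! CX700 storage port Y
zoneset name ZS_vsan1 vsan 1
  member zoneXY
zoneset activate name ZS_vsan1 vsan 1
```

One zoneset in one VSAN, with no topology database, no second domain, and no FSPF between domains.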

So, you really don't have any separation of fabric services if you are effectively making it any-to-any connectivity. There is no isolation in putting the iSCSI initiators into a VSAN but still providing connectivity to all of the FC targets. Instead, this reverses the isolation concept, because you have completely exposed the storage.

Excellent points...thanks for the explanation.

However, if he used IVR, FSPF and FCNS databases would remain separate and would not be merged. Keeping them separate seems like a good idea, especially with the potential for large numbers of low-cost iSCSI nodes.

Likewise, BF, RCF, FCS, zoneset distribution traffic, non-IVR-related SW-RSCN traffic, non-IVR-related FSPF traffic, and other control traffic (principal switch selection) would be isolated.

Your last sentence seems to assume that these targets are the only targets in the SAN. By using IVR here, he would be providing fabric-level isolation from other existing targets (tape devices or other disk subsystems). Further, even though the FLOGI, FC2, and nameserver services each run as only one process, the system is designed to be fair between different VSANs. So if an FC HBA went bad and started doing a thousand FLOGIs per second or something crazy like that, the other VSANs would not be affected, while nodes in the same VSAN might be. So while there is no value if the process crashes, there is value in the case of process monopolization.

Or, what if the SAN is a core-edge design with ISLs? If everything were in the same VSAN, a flapping ISL would cause fabric reconfigurations (RCF), which would affect his iSCSI nodes (and, admittedly, his storage too).

So my question is, why not have 3 VSANs? One for targets, one for FC initiators, and one for iSCSI initiators... Thanks for taking the time to explain this stuff - this is a very interesting topic to me.

Thank you for all the responses. I agree, this is very interesting to me as well.

j.sargent - this is a single-core design; the only other switch involved in this configuration is an MDS 9216 at a remote site, connected to an EMC CX500, that the 9509 has an FCIP relationship with.

...Also, I realize the need for static domain IDs when using the IVR solution, and it is configured as such.
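For anyone following along, pinning the domain IDs looks something like this on the MDS. This is a sketch from memory of SAN-OS 2.x syntax; the VSAN and domain numbers are invented for illustration, and a static domain ID only takes effect after an fcdomain restart, which is disruptive to that VSAN.

```
! hypothetical numbering: vsan 10 = FC hosts, vsan 20 = iscsi hosts
fcdomain domain 10 static vsan 10
fcdomain domain 20 static vsan 20
! the new static IDs apply only after a restart, which is disruptive:
fcdomain restart disruptive vsan 10
fcdomain restart disruptive vsan 20
```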

My question (remaining unanswered from my original post): since the current configuration is the 2 separate VSANs connected to 2 different sets of storage ports, what impact (if any) would implementing the IVR solution to have the iSCSI hosts access the same storage ports have on the production FC hosts? The main goal is to make more efficient use of the storage ports they have. Will the hosts hiccup or panic? Thanks in advance.

The core answer is that there would be no difference to the end devices. They don't know what the VSAN construct is; they only know Fibre Channel. The FC frames would be the same to the storage and the iSCSI initiators.

Regarding the flapping ISL, you would only get an RCF if the iSCSI or FC-target VSANs were carried over the ISL and there was a problem with which switch is principal. Over an FCIP or long-haul ISL, you will want to use IVR to provide that isolation. In the case of FCIP or distant fibre, IVR does provide the isolation you need. Most of the time, the FCIP link is only going to be used for array-to-array replication, i.e. 1-to-1. The initiators rarely need access to storage at the remote end, i.e. it is not an any-to-any situation. IVR over a distance TE link does truly provide the isolation everyone is looking for.

But, on one switch, it does not. In a single VSAN, you will not have an RCF. Moreover, in a dual or triple VSAN solution the FCNS service/process in the switch will send the same frames to the initiators and storage as if they were in the same VSAN. The initiators log in and query the name server just the same as if they were in the same VSAN. But now the IVR process has to determine not only whether they are zoned together, but also the additional field of which VSAN each is in. In the proposed solution, iSCSI initiator X still has to be zoned with FC-attached storage Y whether X is in the same VSAN as Y or not; it's just that the zoning is done with the IVR zoneset instead of the regular zoneset. So the interaction with the nameserver is the same either way. It's just that the config is much more complex, you start up FSPF between the domains, and you now require the Enterprise license.

Zoneset distribution traffic is increased because when you activate the IVR zoneset it will send stage-fabric-config to all of the domains that are in the IVR VSAN topology on that switch, and then it has to copy and insert the IVR zones into the regular zones. In this case, that would be 2 VSANs and the associated regular zonesets. So, if X is in vsan 1 and target Y is in vsan 2, then the zoneXY with these 2 members gets copied into vsan 1 and vsan 2. So, you have doubled ZS propagation and now you have 3 zones: the IVR_zoneXY in vsan 1, the IVR_zoneXY in vsan 2, and the zoneXY in the IVR zoneset.
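To make those mechanics concrete, the IVR config being discussed would look roughly like this. This is a hedged sketch recalled from SAN-OS 2.x syntax; the VSAN numbers, zone names, and pWWNs are placeholders, and the switch-wwn in the topology database would be your own switch's WWN, not the one shown.

```
ivr enable
! declare which VSANs IVR may route between (placeholder switch WWN)
ivr vsan-topology database
  autonomous-fabric-id 1 switch-wwn 20:00:00:0d:ec:aa:bb:cc vsan-ranges 1-2
ivr vsan-topology activate
! one IVR zone whose members live in different VSANs
ivr zone name zoneXY
  member pwwn 21:00:00:e0:8b:11:22:33 vsan 1   ! iscsi initiator X
  member pwwn 50:06:01:60:10:60:14:f5 vsan 2   ! CX700 storage port Y
ivr zoneset name IVR_ZS
  member zoneXY
! activation copies IVR_zoneXY into the active zonesets of vsan 1 and vsan 2
ivr zoneset activate name IVR_ZS
```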

A malfunctioning FLOGI HBA will require the same HW/SW resources no matter what VSAN it's in. So it doesn't matter if that HBA is in VSAN X or Y; since the process is the same, the fairness algorithm will allocate resources to it. Any malfunctioning RSCN behavior will affect the devices that are zoned with that malfunctioning/rogue device the same, whether they are in the same VSAN or not.

I think you just answered my question the minute after I posted it. Thank you, everyone; very helpful!

Thanks again for the excellent explanation. I was not aware that initiators in one VSAN would have to log in to another VSAN if using IVR. Guess I wasn't thinking that all the way through...

My point with the zoneset distribution traffic being decreased was really meant for multi-switch implementations, where if you only had one VSAN then the entire zoneset would be distributed to all switches in the domain. If you had multiple VSANs you could limit the cross-domain traffic to only the IVR zonesets. Am I on track there?

Can you clarify a bit more on the fairness algorithm? I understand that a malfunctioning HBA will require the same amount of resources no matter which VSAN it is in - my question was whether the fairness algorithm provides assured resources to other members in the same VSAN just as it provides assured resources to members of other VSANs. I haven't been able to get a clear picture of exactly how the fairness algorithm works...is it documented anywhere?

Thanks again for your time on this. This has been, hands down, the most informative topic in the Storage Networking forum yet!

mrfrase
Level 1

For iSCSI hosts you will not want to use IVR, because iSCSI initiators can be configured into more than 1 VSAN. See VSAN membership in the configuration. Make sure you enable the iscsi-interface-vsan-membership feature. After that, no concerns: the iSCSI initiator will be present in each VSAN you selected; zone, and you're on your way.
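That approach would look something like the following. This is a sketch assuming SAN-OS 2.x syntax recalled from memory; the IQN and VSAN IDs are hypothetical and should be replaced with your own, and the exact feature-enable command is worth verifying against the configuration guide for your release.

```
! allow per-interface / per-initiator VSAN membership for iscsi
iscsi interface vsan-membership
! a hypothetical initiator made a member of both the FC and iscsi VSANs
iscsi initiator name iqn.1991-05.com.example:host1
  vsan 10
  vsan 20
```

The initiator then appears as a virtual N-port in each listed VSAN and can be zoned there with regular (non-IVR) zones.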
