Cisco Support Community

New Member

IVR hosts having difficulty logging into CLARiiON

Hello,  I am having a problem with host HBAs logging into a CLARiiON array via IVR.  The HBAs are zoned to two SP ports (SPA and SPB).  Usually only one SP port gets logged into successfully, which is very strange.  Sometimes it works fine.  Then I start trying to get the others working and the working one goes away.

Simple IVR topology.  Site 1 is a three-switch fabric (VSAN 250) connected via GE FCIP to site 2, a one-switch fabric (VSAN 100).  The transit VSAN is 1000.  The IVR zoneset is consistent on all switches.  All IVR zones are injected into the active zonesets on the switches and show as active.  All FCNS entries are correct (as far as I can tell): show fcns database for VSANs 100, 1000, and 250 all show the HBA N ports and storage ports logged in.  No errors anywhere.  We have an extremely robust site-to-site connection (4G FCIP per fabric, A and B): 2 ms ping times, RTT consistently 270 us (dark fiber).  We are getting about 600 MB/sec throughput with no errors.
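For anyone checking a similar setup, the verification I describe above looks something like this on the MDS CLI (a sketch; the VSAN numbers are from my topology and will differ in yours):

```
show ivr vsan-topology        ! transit VSAN 1000 should connect both edge VSANs
show ivr zoneset active       ! IVR zones should appear in the active zoneset
show fcns database vsan 250   ! HBA N ports visible in the site 1 edge VSAN
show fcns database vsan 100   ! storage SP ports visible in the site 2 edge VSAN
show fcns database vsan 1000  ! IVR virtual devices visible in the transit VSAN
```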

I'm pretty much on my own.  I have opened a case with the vendor, but they are not making any headway... show tech-supports have been sent in, etc.


Any idea what might cause an initiator to log into only one target?  I have tried adding all the targets and the HBA into one zone... it didn't help.



New Member

Re: IVR hosts having difficulty logging into CLARiiON

Hopefully this will help someone.  I figured this problem out myself.  Apparently this is a known issue with FCIP WA (write acceleration).  If an HBA has multiple paths to a target, over multiple equal-cost FCIP tunnels, WA can cause issues.  I disabled WA and everything started working fine.
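In case it helps anyone, write acceleration is disabled per FCIP interface, and it has to match on both tunnel endpoints (a sketch assuming the MDS SAN-OS/NX-OS CLI; the interface number is from my setup and may differ):

```
conf t
 interface fcip 1
  shutdown              ! bounce the tunnel so both ends renegotiate
  no write-accelerator  ! disable FCIP write acceleration
  no shutdown
```

Afterwards, "show interface fcip 1" should report write acceleration as off; do the same on the peer switch, since a WA mismatch keeps the tunnel from coming up.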

Our site-to-site configuration runs across four GigE links per fabric, channeled together into two equal-cost port channels.  We are migrating our ESX clusters to a new data center via SVMotion (ESX Storage VMotion) across these links.  Not leveraging WA seems to have really slowed things down, but at least the environment is stable.
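For context, each fabric's tunnels are bundled roughly like this (a sketch only, not my exact config; the profile and channel-group numbers are made up):

```
interface fcip 1
 use-profile 11
 channel-group 1 force  ! first GigE tunnel into port channel 1
interface fcip 2
 use-profile 12
 channel-group 1 force  ! second GigE tunnel into the same port channel
```

With two such equal-cost port channels per fabric, traffic for the same initiator-target pair can be load-balanced across different tunnels, which is where WA appeared to fall over in our case.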