hi folks, new to the MDS9500 series directors...so i'm going to ask some basic questions.
Currently have 2 switches. I've inherited this issue, they are attached to HDS arrays. one switch shows the HDS FA's, but the other does not.
I'm assuming it has something to do with PLOGI, but not having Fabric Manager, i'm poking around in the CLI and not getting very far, frankly...
1. How do I determine, via the CLI, if the 2 switches are in the same fabric?
2. If the above is true, i would think the domain IDs would need to be different.
if they're in the same fabric with the same domain IDs, would this manifest itself as the 2nd switch not being visible in the Device Manager name server?
thanks in advance..
The MDS uses VSANs, which are virtual fabrics, so it is possible to have a VSAN completely contained in a single MDS, or have that VSAN span multiple MDS. You are correct, if the VSAN spans multiple MDS, then each MDS in that VSAN will require a unique domain ID.
Commands to help you on the CLI
show vsan membership
shows which ports are in which VSAN
show fcdomain domain-list vsan 100
shows all domain IDs that this switch knows about in VSAN 100 (for example). The domain with [Local] is this switch. The one with [Principal] next to it is the principal switch for this VSAN.
show topology vsan 100
shows which interfaces this switch is using and which interfaces the remote switch is using to carry VSAN 100 between the switches.
To view how devices register, and what the MDS has learned from the attached devices:
show fcns database vsan 100
show fcns database detail vsan 100
I use VSAN 100 as an example, yours may be different.
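For example, the domain list output might look something like this (the WWNs below are placeholders, and the exact format varies by SAN-OS/NX-OS release):

```
switch1# show fcdomain domain-list vsan 100
Number of domains: 2
Domain ID              WWN
---------    -----------------------
 0x61(97)    20:64:00:0d:ec:aa:aa:aa [Local] [Principal]
 0x62(98)    20:64:00:0d:ec:bb:bb:bb
```

If each switch only ever lists itself ([Local] [Principal] and nothing else), the two switches are not in the same fabric for that VSAN.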
Hope this helps,
thanks mike...yes that helps...
as i look at these 2 switches configs, they have separate VSANs and both are principal switches...this implies 2 different fabrics, right?
I'm not sure what the folks are trying to accomplish, but i think the base config is off...
the storage controllers are connected to both MDS's, i would think to allow for dual pathing of host storage, but can that be done in separate fabrics, or better question, should it be done?
At any rate, SW1 appears to be fine; on SW2 there aren't even any lights lit up...interfaces show down: "fc2/17 is down (Link failure: loss of signal)"
checked to make sure polarity was correct, that looks good...light is emitting from both sides (storage controller and switch)...
I'm missing something, but cant put my finger on it...
Yes it appears to be 2 separate fabrics. I agree that it sounds like the intent was for 2 paths between host and storage, and the most reliable way is to use 2 independent director class switches, and not connect them via an ISL.
Not sure how you're checking for light, but be careful. Longwave SFPs use lasers that are supposed to have safety circuitry to keep the lasers off when the cable is not terminated.
roger...run as 2 separate fabrics...my storage rustiness is showing ;-)...so, then the same domain IDs won't come into play in this scenario...
thanks for the safety tip...i'm being careful... :-) I'm referring to the interface light indicator on the MDS (and storage controller) are not illuminating..
so, if both the MDS and the storage interfaces are enabled (as best i can tell) and light is emitting from both fiber interfaces, there should be a FLOGI once the physical connection is made (and one would think the link light on the MDS would come on once connected)...cable maybe??
When running the MDS as separate fabrics, some customers do use the same static domain ID on both. This way, if someone connects a cable between them unintentionally, they will not merge, due to the domain ID conflict.
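As a sketch (VSAN 100 and domain 10 are just example values), a static domain ID can be set like this; note that a domain ID change only takes effect after a disruptive restart of that VSAN:

```
switch(config)# fcdomain domain 10 static vsan 100
switch(config)# fcdomain restart disruptive vsan 100
```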
Yes, once the low level fibre channel initialization occurs, and the transmitters and receivers on both the storage unit and the MDS interface are in sync, the device should transmit a FLOGI to the MDS.
To view devices that have sent FLOGI, issue this command
show flogi database
if you wish to limit the output to 1 VSAN
show flogi database vsan x
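For example (the interface, FCID, and WWNs below are placeholders):

```
switch1# show flogi database vsan 100
---------------------------------------------------------------------------
INTERFACE  VSAN    FCID      PORT NAME                NODE NAME
---------------------------------------------------------------------------
fc2/17     100     0x610001  50:06:0e:80:aa:aa:aa:aa  50:06:0e:80:aa:aa:aa:aa
Total number of flogi = 1.
```

An attached device that is missing from this output has not completed FLOGI, which usually points at a physical or port-mode problem.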
again, thanks Mike,
Yea, i've run the sho flogi db command...and it is very obvious the interfaces have not executed the FLOGI...
i'm stuck on why these interfaces aren't seeing each other...
i took a known good interface (from SAN 1) and connected it to the storage unit that is having the issue, and BAM, popped right up...something in SAN 2's config...
have tried a new cable, no joy...I know that the storage controller will respond (plugged the other SAN 1 interface into it); I know that there is light emitting and the polarity is correct...i'm starting to run outta things...
any other thoughts??
think i got it...
i changed switchport mode to FL...they all came right up...
so, it looks like the mode on the interfaces was what the problem was...
thanks for the posts Mike...appreciate it...
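For anyone else hitting this, the change was along these lines (using fc2/17 from earlier in the thread as the example interface):

```
switch2# configure terminal
switch2(config)# interface fc2/17
switch2(config-if)# switchport mode FL
switch2(config-if)# no shutdown
```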
When you plugged it into the working SAN switch, did it come up as an FL port?
Fx for admin mode should mean the port would operate as F or FL. Not sure why a storage array would only operate as an FL port. FL is Fabric-Loop and is somewhat antiquated. There may be a setting on the array interface that forces it to Arbitrated Loop (FL) as opposed to auto where it should detect if the switch port can do both F (point to point) or FL (arbitrated loop).
I doubt you will see any negative impact with the MDS port operating as F versus FL, but it would be interesting to know if the same array has interfaces to the other SAN switch operating in F or FL mode.
Glad it's working for you.
SAN 1: oper mode: 3 are FL; 4 are F
SAN 2: all oper mode are FL.
admin mode FX on both...
the only test i did when i connected the non-working interface with the known good fiber was that the CG0 and CG1 lights lit up...didn't actually do any name server checking or anything...i was just trying to rule out the fiber and trying to narrow it down to either the array interface or the switch interface (luckily, it turned out to be the switch interface)
so, began poking around and stumbled onto the fact that the interfaces were not showing any mode, played with the FL mode, and all became rainbows and unicorns ;-)
tried to reconfig to auto and got following error: "Auto/E mode is not allowed in shared rate-mode"
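If i'm reading that error right, it's the oversubscribed-port restriction: ports in shared rate-mode can't run auto (or E) mode. If auto is ever needed, the change would be something like this sketch (be aware that dedicated rate-mode consumes bandwidth from the port group, so other ports in the group may need to stay shared):

```
switch2(config)# interface fc2/17
switch2(config-if)# switchport rate-mode dedicated
switch2(config-if)# switchport mode auto
```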
We have HDS storage arrays here and they use F mode when logging into the switches. I can't see anywhere you can change the Login Mode for the array. If it does live anywhere it might be on the SVP.