Activation failed : reason Fabric changing domain xx

Unanswered Question
Feb 17th, 2010

We can't activate our datacenter VSAN zoneset. The running zoneset is fine, but when we try to zone a new host to its storage we get:

%ZONE-2-ZS_CHANGE_ACTIVATION_FAILED: %$VSAN 11%$ Activation failed
%ZONE-2-ZS_CHANGE_ACTIVATION_FAILED_RESN_DOM: %$VSAN 11%$ Activation failed : reason Fabric changing domain 17
snmpd: SNMPD_SYSLOG_CONFIG_I: Configuration update from 4166_192.168.163.1 user/community name : admin
%ZONE-2-ZS_CHANGE_ACA_FAILED: %$VSAN 11%$ ACA failed : domain 0x11 returns FABRIC_CHANGING
%ZONE-2-ZS_CHANGE_ACTIVATION_FAILED: %$VSAN 11%$ Activation failed

We're on SAN-OS 3.3(1c).

We have one core and three edge switches, and the activation seems to be hung up on the edge switch that holds domain ID 17 in this VSAN. Fabric 2 is configured identically and is working.
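For anyone comparing notes: a quick way to see whether the switches agree on the fabric's domain membership is to check the domain list on each of them. This is a sketch, not from the original post; it assumes VSAN 11 as in the logs above, and output format varies a bit by release:

```
! Run on the core and each edge switch in the fabric
show fcdomain domain-list vsan 11   ! all switches should report the identical set of domains (0x0d, 0x0f, 0x11, ...)
show fcdomain vsan 11               ! the local domain state should read "Stable"
```

If one switch lists a domain the others don't, that mismatch is exactly what makes the zone server report FABRIC_CHANGING.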

show zone internal change event-history vsan 11 shows:

1) Transition at Sun Feb  7 07:20:47 2010
    Prev State: [Idle]
    Trig event: [FABRIC_CHANGE] (Dom:17 Up)
    Next State: [Idle]
2) Transition at Wed Feb 17 14:22:34 2010
    Prev State: [Idle]
    Trig event: [REQ_CHANGE] (Activate)
    Next State: [Get Auth]
3) Transition at Wed Feb 17 14:22:34 2010
    Prev State: [Get Auth]
    Trig event: [RCVD_RJT/FAIL] (Dom:17) <<< this guy right here. what does that mean?
    Next State: [Release Auth]
4) Transition at Wed Feb 17 14:22:34 2010
    Prev State: [Release Auth]
    Trig event: [RCVD_ACC] (Dom:17)
    Next State: [Release Auth]
5) Transition at Wed Feb 17 14:22:34 2010
    Prev State: [Release Auth]
    Trig event: [RCVD_ACC] (Dom:13)
    Next State: [Release Auth]
6) Transition at Wed Feb 17 14:22:34 2010
    Prev State: [Release Auth]
    Trig event: [RCVD_ACC] (Dom:15)
    Next State: [Release Auth]
7) Transition at Wed Feb 17 14:22:34 2010
    Prev State: [Release Auth]
    Trig event: [ALL_ACC]
    Next State: [Idle]
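Reading the trace (my interpretation, not authoritative): on activation, the zone server first asks every domain for change authorization (ACA). Domain 17 rejected the request with FABRIC_CHANGING (event 3), so the zone server released the authorizations it had collected from domains 17, 13, and 15 (events 4 through 6) and dropped back to Idle without ever distributing the new zoneset. Two status commands that can confirm the zone server's view of the failure; exact wording of the output differs by release:

```
show zone status vsan 11   ! the "Status:" line records the reason for the last failed activation
show fcdomain vsan 11      ! on the domain-17 switch: confirm the domain manager is Stable, not mid-election
```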

We have a call open with the TAC, but we thought we'd ask here as well.

mattkauffmann Thu, 02/18/2010 - 13:33

Apparently this is a bug in 3.3x: stale domain IDs are left floating around on the switches. That isn't a problem until the switches disagree, i.e. one switch has a stale domain ID and another doesn't. Cisco has to run a utility from the bootflash: partition to clean up the stale domain IDs. Nice.
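For later readers hitting the same symptom: on releases that support it, a non-disruptive restart of the domain manager is sometimes suggested as a workaround before TAC runs their cleanup utility. This is a hedged sketch, not the fix from this thread; follow TAC's guidance, and note that adding the "disruptive" keyword would bounce the whole VSAN:

```
configure terminal
  fcdomain restart vsan 11   ! non-disruptive domain manager restart for VSAN 11; do NOT add "disruptive"
```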
