Question on UCS Management IP Addresses and Fabric Failover

Feb 2nd, 2010

We have a pair of 6120s connected to a single chassis, with the management interfaces set up as follows:


Fabric A = 10.1.239.2

Fabric B = 10.1.239.3

Virtual IP = 10.1.239.1


The HA cluster is showing active and operational.


Issues:


We can access the system fine by browsing to 10.1.239.1

We can ping .1 and .2

We cannot ping .3


We have re-checked addressing, masks, default gateway etc. and all seem to be in order.


If I power down Fabric A the virtual address does not come active on Fabric B.


The system continues to pass user data to the ESX servers we have configured on the blades; only management access does not seem to fail over correctly.


Anyone have any thoughts on this? We are running the latest code, 1.0(2j).
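
A minimal sketch of the reachability check described above, assuming only a Unix-style ping command on the workstation and the three management addresses from this post:

    # Ping the VIP and both FI management addresses and report which respond.
    # Assumes "ping -c 1 -W 2 <addr>" works on the host running the script.
    import subprocess

    ADDRESSES = {
        "Virtual IP": "10.1.239.1",
        "Fabric A": "10.1.239.2",
        "Fabric B": "10.1.239.3",
    }

    for name, addr in ADDRESSES.items():
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "2", addr],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        status = "reachable" if result.returncode == 0 else "NOT reachable"
        print(f"{name:10s} {addr:15s} {status}")

In the situation described here, the output would show .1 and .2 as reachable and .3 as not reachable.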

Robert Burns Tue, 02/02/2010 - 15:30

+ How do you have the cluster links on the Fabric Interconnects connected? Single or dual?


+ How long are you allowing for the failover to occur? I've seen it take 2-3 minutes for the VIP to move to the other FI.


Robert

jrhofman Tue, 02/02/2010 - 17:03

Thanks for the reply Robert,


Fabric Interconnects are dual cluster-attached (port 0 to port 0 and port 1 to port 1), just a couple of 3-foot Cat 5 cables.


I've waited well over 5 minutes but still nothing. When I power Fabric A back up, the virtual IP eventually comes live again.


Can you tell me if you were able to ping both management interfaces (.2 and .3 in my case) in your installs?

Robert Burns Tue, 02/02/2010 - 17:16

Yes, you should always be able to ping both individual Fabric Interconnect mgmt addresses as well as the VIP.


I'll run a test today and advise exactly how long it takes for the VIP to fail over to the secondary FI when the primary is failed.


**Update** During my tests it took 1 min 35 sec to fail the VIP over to the secondary Fabric Interconnect.


Regards,


Robert
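
A rough way to repeat that failover-timing test from a management workstation is to poll the VIP with ping and measure the gap between the first missed reply and the first reply after the secondary takes over. A minimal sketch, again assuming a Unix-style ping and the VIP from this thread:

    # Roughly time how long the cluster VIP is unreachable during a failover test.
    # Start the script, then power off the primary FI when prompted.
    import subprocess
    import time

    VIP = "10.1.239.1"  # cluster virtual IP from this thread

    def pingable(addr):
        """Return True if a single ping to addr succeeds (Unix-style ping assumed)."""
        return subprocess.run(
            ["ping", "-c", "1", "-W", "2", addr],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        ).returncode == 0

    print("Power off the primary FI now; timing starts at the first missed ping...")
    while pingable(VIP):
        time.sleep(1)

    start = time.time()
    while not pingable(VIP):
        time.sleep(1)

    print(f"VIP answered again after roughly {time.time() - start:.0f} seconds")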

jrhofman Tue, 02/02/2010 - 17:52

Thanks Robert,


I plan on running more tests again tomorrow as well. I think I need to get into the CLI and have a look around.

gdragatsis Tue, 02/02/2010 - 17:59

Guys,


In UCSM, under the Admin tab, check the management interfaces and ensure all relevant protocols are enabled for both FIs.


Also check whether you can SSH and HTTP to the virtual interface and the primary FI. The secondary should say HTTP is not enabled (see the sketch after this post).


Not sure if the uplinks are configured correctly.


Try these and report back.


gd
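
A minimal sketch of the SSH/HTTP check gd suggests, using plain TCP connects to ports 22, 80 and 443 as a rough proxy (which ports actually answer depends on the protocols enabled in the UCSM management interface settings):

    # Check whether SSH (22), HTTP (80) and HTTPS (443) accept TCP connections on
    # the VIP and both FI management addresses. An open port is only a rough proxy:
    # per gd's note above, the subordinate FI may still refuse to serve the GUI
    # even if a port accepts the connection.
    import socket

    ADDRESSES = {
        "Virtual IP": "10.1.239.1",
        "Fabric A": "10.1.239.2",
        "Fabric B": "10.1.239.3",
    }
    PORTS = {"ssh": 22, "http": 80, "https": 443}

    for name, addr in ADDRESSES.items():
        results = []
        for label, port in PORTS.items():
            try:
                with socket.create_connection((addr, port), timeout=3):
                    results.append(f"{label}:open")
            except OSError:
                results.append(f"{label}:closed")
        print(f"{name:10s} {addr:15s} " + "  ".join(results))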

Robert Burns Tue, 02/02/2010 - 17:59

If you hit the same issues again, let me know. We'll grab your config and topology if you're still having problems.


Robert

jrhofman Wed, 02/03/2010 - 07:46

Well, it finally dawned on me what the issue is. Because of a lack of copper SFPs, we only had the management interface on Fabric A connected to the network. As soon as I moved the mgmt connection from A to B, 10.1.239.3 on Fabric B was pingable.


Sorry to waste everyone's time on a stupid mistake. I will get both mgmt interfaces connected, and I'm sure failover will be fine.

stechamb Thu, 02/04/2010 - 00:53

No worries, "unexpected cabling" is in the top 3 root causes of issues :-) The most common one I find is "unexpected Fibre Channel crossover", where 6120-A is connected to MDS-B :-) That's a bugger when you are trying to zone and can't see the pWWNs... because they're going to the wrong fabric!


Happens to the best of us! :-)
