
Question on UCS Management IP Addresses and Fabric Failover

jrhofman
Level 1

We have a pair of 6120s connected to a single chassis. We have the management interfaces set up as:

Fabric A = 10.1.239.2

Fabric B = 10.1.239.3

Virtual IP = 10.1.239.1

The HA cluster is showing active and operational.
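For reference, show cluster state from the local management CLI also reports everything as healthy. This is roughly what we see (output abbreviated from memory, so details may differ by release):

    UCS-A# connect local-mgmt
    UCS-A(local-mgmt)# show cluster state
    Cluster Id: 0x...
    A: UP, PRIMARY
    B: UP, SUBORDINATE
    HA READY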

Issues:

We can access the system fine by browsing to 10.1.239.1.

We can ping .1 and .2.

We cannot ping .3.

We have re-checked addressing, masks, default gateway, etc., and everything seems to be in order.

If I power down Fabric A, the virtual address does not become active on Fabric B.

The system continues to pass user data to the ESX servers we have configured on the blades; only management access does not seem to fail over correctly.

Anyone have any thoughts on this? We are running the latest code, 1.0(2j).

8 Replies

Robert Burns
Cisco Employee

+ How do you have the cluster links on the Fabric Interconnects connected?  Single, dual?

+ How long are you allowing for the failover to occur?  I've seen it take 2-3 mins for the VIP to move to the other FI.
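
You can also check the cluster links themselves with show cluster extended-state from local-mgmt. A rough sketch of what healthy output looks like (abbreviated, and the exact fields vary by release):

    UCS-A# connect local-mgmt
    UCS-A(local-mgmt)# show cluster extended-state
    A: UP, PRIMARY
    B: UP, SUBORDINATE
    ...
    INTERNAL NETWORK INTERFACES:
    eth1, UP
    eth2, UP
    HA READY

If either internal interface shows DOWN, one of the two cluster cables is suspect.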

Robert

Thanks for the reply Robert,

Fabric Interconnects are dual cluster attached (port 0 to port 0) and (port 1 to port 1). Just a couple of 3 foot CAT 5 cables.

I've waited well over 5 minutes but still nothing. If I power Fabric A back up, the virtual IP eventually comes live again.

Can you tell me if you were able to ping both management interfaces (.2 and .3 in my case) in your installs?

Yes, you should always be able to ping both of the individual Fabric Interconnect mgmt addresses as well as the VIP.

I'll run a test today and advise exactly how long it takes for the VIP to fail over to the secondary FI when the primary is failed.

**Update**  During my tests it took 1 min 35 sec to fail the VIP over to the secondary Fabric Interconnect.
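
While you wait on a failover test of your own, you can also source pings from the FI side; local-mgmt has a basic ping. A sketch using the addresses from your post:

    UCS-A# connect local-mgmt
    UCS-A(local-mgmt)# ping 10.1.239.3

That helps separate "the FI can't reach the network" from "the network can't reach the FI".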

Regards,

Robert

jrhofman
Level 1

Thanks Robert,

I plan on running more tests tomorrow as well. I think I need to get into the CLI and have a look around.

Guys,

In UCSM, under the Admin tab, check the management interface and ensure all relevant protocols are enabled for both FIs.

Also check whether you can SSH and HTTP to the virtual interface and the primary FI. The secondary should say HTTP is not enabled.

I'm also not sure whether your uplinks are configured correctly, so check those as well.

Try these and report back.
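
From memory (scope names may vary a little by UCSM release), the equivalent check from the CLI looks something like:

    UCS-A# scope system
    UCS-A /system # scope services
    UCS-A /system/services # show configuration

and services such as HTTPS and SSH can be enabled in that same scope with enable https, enable ssh-server, etc.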

gd

If you hit the same issues again let me know.  We'll grab your config and topology if you're still having problems.

Robert

Well, it finally dawned on me what the issue was. Because of a lack of copper SFPs, we only had the management interface on Fabric A connected to the network. As soon as I moved the mgmt connection from A to B, 10.1.239.3 on Fabric B was pingable.

Sorry to waste everyone's time on a stupid mistake. I will get both mgmt interfaces connected, and I'm sure failover will be fine.
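
For anyone who hits the same thing: the physical mgmt port state is visible from the NX-OS shell on each interconnect. Roughly what Fabric B looked like before I moved the cable (output abbreviated from memory):

    UCS-B# connect nxos
    UCS-B(nxos)# show interface mgmt 0
    mgmt0 is down (Link not connected)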

No worries, "unexpected cabling" is in the Top 3 root causes of issues :-)  The most common one I find is "unexpected Fibre Channel crossover", where 6120-A is connected to MDS-B :-)  That's a bugger when you are trying to zone and can't see the pWWNs, because they're going to the wrong fabric!

Happens to the best of us! :-)
