Heartbeat communication breaks while moving a clustered VM to another ESX host
Let me explain the existing setup before describing the problem.
We have a virtual distributed switch with three ESX hosts connected to it. We also have two Virtual Connect (VC) modules, from which EtherChannels are formed to the access switches. VMs are configured on each ESX host.
The logical connections from the ESX hosts to the access switches are as follows.
ESX-A 1st NIC --------> VC1============>Access Switch 1
ESX-A 2nd NIC -------> VC2============>Access Switch 2
ESX-B 1st NIC --------> VC1===========>Access Switch 1
ESX-B 2nd NIC -------> VC2===========>Access Switch 2
ESX-C 1st NIC --------> VC1===========>Access Switch 1
ESX-C 2nd NIC -------> VC2===========>Access Switch 2
There are two VMs configured with clustering on one of the ESX hosts. The heartbeat VLAN is created and is allowed on the port channels (dot1q trunks). The two VMs can ping each other's heartbeat IP without any issue when they are hosted on the same ESX host (e.g. ESX-A). But when the system admin moves one of the VMs to another ESX host (ESX-B), heartbeat IP reachability breaks and the servers lose their clustering.
The behavior varies. Sometimes two VMs, one on ESX-A and the other on ESX-B, can reach each other's heartbeat IP without any issue. But when one VM is moved from ESX-B to ESX-C, they lose the heartbeat connection.
In general, the only scenario where the heartbeat never breaks is hosting the clustered VMs on the same ESX host. An established heartbeat connection also breaks when a VM is moved between ESX hosts.
We are in the process of deploying a high availability solution, and migration of VMs among the ESX hosts is essential.
Clearing the ARP cache might solve this issue, but I don't want to disturb the production network, as the port channels on the access switches are configured to allow other production VLANs.
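For what it's worth, if an ARP clear ever does become necessary, a targeted clear is far less disruptive than flushing the whole cache. A hedged IOS sketch (the IP address below is a hypothetical heartbeat address, not one from this setup):

```
! Clears only the ARP entry for one host, leaving the rest of the cache intact
clear ip arp 10.10.10.1

! Avoid this form on a production switch - it flushes the entire dynamic ARP cache
! clear arp-cache
```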
Any solution or workaround for this issue would be greatly appreciated.
Dear John,
Many thanks for your attempt to help.
Actually, NLB has not been configured here yet; we are in the initial phase of establishing heartbeat communication between the VMs.
Finally, the issue is solved. Since the access switches don't have port channels configured between themselves, I needed to allow the heartbeat VLANs on the EtherChannel trunk connecting to the core/distribution layer.
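For reference, the change amounts to adding the heartbeat VLAN to the allowed list on the uplink trunk. A minimal IOS sketch, assuming Port-channel1 is the EtherChannel toward the core/distribution layer and VLAN 100 is the heartbeat VLAN (both names are hypothetical):

```
interface Port-channel1
 ! "add" appends to the existing allowed list; omitting it would
 ! replace the list and cut off the production VLANs
 switchport trunk allowed vlan add 100
```

The `add` keyword is the important part here, since the same trunk carries the other production VLANs mentioned above.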