OOB mgmt, inband mgmt and infrastructure best practice

apache_le
Level 1

Hi all,

I am looking for best practice concerning out-of-band mgmt, in-band mgmt, and infrastructure mgmt (vCenter, ESXi mgmt, ACS, Prime, etc.).

For out-of-band and in-band mgmt:

- Do you connect the OOB connections to a separate external switch, such as a 2960, or directly into the fabric?

- Do you usually configure both in-band and out-of-band mgmt, or just one?

For infrastructure like vCenter, ESXi mgmt, and ACS appliances, do you connect these to an external switch which has an L3 connection into the fabric, or directly into the fabric?  If directly into the fabric, do you place these services in the default "mgmt" tenant?

Thanks.

14 Replies

Philip D'Ath
VIP Alumni

This is a question with a lot of right answers.  It depends hugely on your security posture, perceived risks, and role separation of teams inside of your company.

For example, if you have one IT team, where everyone does everything (so there is no role separation), and a single site, then out-of-band management seems overkill.  Simple in-band management would be preferred.  Your IT team just wants to get on the kit and do their job.

Let's say you have multiple dark (unstaffed) data centres.  You are probably more interested in OOB, so that when something goes badly wrong you can still get in via an independent network and fix the issue.

Let's say you have a large IT team, with role separation.  You have a team of ESX engineers.  A team of switching engineers.  A security team.  A storage team.  You are going to want multiple separate management networks not connected to the primary routing plane.

Hi p.dath,

Thanks for your response.  So I understand that in the case of a large IT team with role separation, the best practice is to have the OOB connections on a separate external switch with a firewall.  That way, via the firewall, I can control what each individual team can manage.  What happens if, because of cost, the external switch and firewall are not an option?  How can you achieve the same objective?  I assume the OOB connections will be connected to the fabric?  Can you elaborate on "multiple separate management networks not connected to the primary routing plane"?  Thanks.

If there is a large team, then the cost of an extra switch would be equal to about 1 day's pay for that large IT team.  I would discuss the relative costs with the person making the purchasing decision.  It would be silly to skimp on this area for what is a trivial cost for a company of this size.

Failing that, yes, just create an extra VLAN and put all of the management into that.  However, it is now only partially out of band.
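To make the "extra VLAN" option concrete in ACI terms: in-band management is modelled as an in-band EPG under the mgmt tenant, tied to a VLAN encap. A minimal sketch using Python and the APIC REST API; the APIC address, credentials, VLAN, and EPG name are placeholders, and the mgmtInB/mgmtRsMgmtBD class names are from memory, so verify them against your APIC version:

import requests

APIC = "https://apic.example.com"   # hypothetical APIC address
session = requests.Session()
session.verify = False              # lab only; use real certificates in production

# Standard APIC login payload (aaaUser).
session.post(APIC + "/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}})

# In-band management EPG on VLAN 10, tied to the pre-built "inb" bridge domain.
inb_epg = {
    "mgmtInB": {
        "attributes": {"name": "INB-MGMT", "encap": "vlan-10"},
        "children": [{"mgmtRsMgmtBD": {"attributes": {"tnFvBDName": "inb"}}}],
    }
}
session.post(APIC + "/api/mo/uni/tn-mgmt/mgmtp-default.json", json=inb_epg)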

Multiple networks.  Let's say you have 1000 physical servers running VMware and a team of VMware specialists running all sorts of VMware products.

Let's say you have 100 physical firewalls and a team of firewall specialists running those.

Is there really any need for the firewall management team and the VMware management team to share the management plane?  There is a far greater chance that a human accident in one team will affect the other team's service.

Got it. Another question, if you don't mind: in which tenant would you place vCenter, ESXi mgmt, Cisco ACS, etc.?

I don't understand.  Could you perhaps re-phrase the question?

By default, ACI has 3 tenants: common, mgmt, and infra.  Let's say a customer wants to put all of his servers in a single tenant called "Production".  Do you put vCenter, ESXi mgmt (VMkernel), the AAA server, etc. in the "Production" tenant or the "mgmt" tenant?
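For reference, the pre-configured tenants are easy to confirm from the API. A quick sketch (APIC address and credentials are placeholders); fvTenant is the tenant class, and a fresh fabric returns common, infra, and mgmt:

import requests

APIC = "https://apic.example.com"  # hypothetical
s = requests.Session()
s.verify = False
s.post(APIC + "/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}})

# Query all tenant objects and print their names.
resp = s.get(APIC + "/api/class/fvTenant.json").json()
for mo in resp["imdata"]:
    print(mo["fvTenant"]["attributes"]["name"])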

Ramu Gajula
Level 1

Hi Apache,

Adding to p.dath: when you are using OOB mgmt, you have to make sure that vCenter is reachable via the OOB mgmt network of the fabric.

And yes, the OOB ports of the APIC controllers, spine and leaf switches, and VMware hosts can be connected to an external management switch.

vCenter access through OOB mgmt is not recommended in some cases by VMware.

In our case we are not running the vCenter appliance on its own hardware; vCenter is a guest machine that can run on any data center host.

If you would like to build OOB mgmt from the APIC to vCenter, the vCenter guest machine either needs to be pinned to a specific host so that you can use an extra NIC for the vCenter guest, or you have to provide an extra NIC on all ESXi hosts so that the vCenter guest can be migrated like any other guest. Also, vCenter needs to talk to the ESXi hypervisor management network, which is normally an in-band network; in this scenario the fabric needs to provide connectivity between the OOB and in-band networks.

In our deployment, the in-band management EPG under the mgmt tenant (including the APIC server's in-band mgmt interface) does contract import/export with the VM tenant for APIC/vCenter connectivity, but the in-band mgmt EPG is kept isolated from the external network. All other communication with the fabric infrastructure nodes is done via OOB on the top-of-rack OOB management switch.
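A rough sketch of that contract export/import pattern via the REST API. The tenant, application profile, EPG, and contract names here are invented for illustration, and the vzCPIf/vzRsIf/fvRsConsIf class names are from memory, so treat this as a starting point rather than a verified config:

import requests

APIC = "https://apic.example.com"  # hypothetical
s = requests.Session()
s.verify = False
s.post(APIC + "/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}})

# Export a contract defined in tenant mgmt into the VM tenant as a
# "contract interface" (what the GUI calls Export Contract).
exported = {
    "vzCPIf": {
        "attributes": {"name": "INB-MGMT-ACCESS"},
        "children": [{"vzRsIf": {
            "attributes": {"tDn": "uni/tn-mgmt/brc-INB-MGMT-ACCESS"}}}],
    }
}
s.post(APIC + "/api/mo/uni/tn-Production.json", json=exported)

# The vCenter EPG in the VM tenant then consumes the imported contract.
consume = {"fvRsConsIf": {"attributes": {"tnVzCPIfName": "INB-MGMT-ACCESS"}}}
s.post(APIC + "/api/mo/uni/tn-Production/ap-MGMT/epg-VCENTER.json", json=consume)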

-- Best Regards

So you configured both in-band and OOB in your fabric?  I had issues during one of my deployments, and Cisco actually recommended using OOB only.  So OOB is the only way the APICs communicate with the vCenter VM.

We've validated the OOB/in-band build with our vCenter in the lab; it works. But be careful not to overlap your in-band management network and OOB network.

Also, in our case the in-band (APIC) server IP is only used for vCenter communication and is not accessible from outside the fabric, so I do not see what the problem is.

There are some tricks around the APIC server's local route preference and the default fabric infra IP range; if you carefully steer clear of these mines, in-band management with vCenter works well. You may lose connectivity between the two in some cases, and the recovery procedure takes extra steps, but that is acceptable in our case.
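One of those mines is easy to check for up front: whether your in-band subnet collides with the fabric's infra TEP pool (10.0.0.0/16 is the default offered at APIC setup), or whether in-band and OOB overlap each other. A stand-alone check using Python's ipaddress module; the subnets shown are hypothetical, and the in-band one is deliberately placed inside the TEP pool so the warning fires:

import ipaddress

infra_tep = ipaddress.ip_network("10.0.0.0/16")      # default infra TEP pool at APIC setup
inband    = ipaddress.ip_network("10.0.5.0/24")      # hypothetical in-band mgmt subnet (overlaps!)
oob       = ipaddress.ip_network("192.168.10.0/24")  # hypothetical OOB mgmt subnet

for name, net in [("in-band", inband), ("OOB", oob)]:
    if net.overlaps(infra_tep):
        print(f"WARNING: {name} subnet {net} overlaps the infra TEP pool {infra_tep}")

if inband.overlaps(oob):
    print(f"WARNING: in-band {inband} overlaps OOB {oob}")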

-- Best Regards

We had some issues when using an NTP server which was located in a tenant.

We've configured both in-band and out-of-band connectivity.

Out-of-band is connected to a physical switch so we can reach the APICs and leaf/spine switches in case of emergency.

However, TAC advised us to only use OOB:

When you mix both in-band and OOB, in-band is always preferred and you can run into issues where the routing table is not using the preferred method.  In this case, it seems best that we remove the in-band config and rely on OOB for all communication.
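If you do keep both, the APIC also exposes a knob for which interface it prefers for its own outbound traffic (System Settings > APIC Connectivity Preferences in recent releases). A hedged sketch; the mgmtConnectivityPrefs class, its DN, and the "ooband" value are from memory, so verify them on your version:

import requests

APIC = "https://apic.example.com"  # hypothetical
s = requests.Session()
s.verify = False
s.post(APIC + "/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}})

# Tell the APIC to source its own management traffic from OOB
# rather than the in-band interface.
prefs = {"mgmtConnectivityPrefs": {"attributes": {"interfacePref": "ooband"}}}
s.post(APIC + "/api/mo/uni/fabric/connectivityPrefs.json", json=prefs)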

Can you clarify what TAC meant by their advice? I understand that in-band is always preferred, but the quote above is ambiguous. When they said "...you can run into issues where the routing table is not using the preferred method", were they referring to a failure of the software to use in-band as the preferred path, or to an inability to manage the system via OOB? In other words, what are they referring to when they say "preferred method"?

We are designing our system to have both OOB and INB. We have a dedicated (emergency, non-routed) OOB environment to which we will be connecting the OOB interfaces, but we plan on using INB for routine tasks. We have been proceeding on the assumption that the OOB interface will respond to traffic on the directly connected subnet but that the INB interface will respond to requests from all other sources. I'm very curious to know if the actual behavior will not match this expectation.

The issue is with the APICs. The leaf and spine switches can handle in-band/out-of-band mgmt traffic. But the APIC has an issue with, for example, NTP. We configured OOB NTP for the APIC, but it still tried INB.
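For the NTP case specifically, the date/time policy lets you bind each NTP provider to a management EPG, which is how you pin it to OOB. A sketch; the datetime* class names, the default policy DN, and the OOB EPG DN are from memory, so double-check them on your release:

import requests

APIC = "https://apic.example.com"  # hypothetical
s = requests.Session()
s.verify = False
s.post(APIC + "/api/aaaLogin.json",
       json={"aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}})

# NTP provider inside the default date/time policy, bound to the default
# OOB management EPG so nodes source NTP via OOB.
ntp = {
    "datetimeNtpProv": {
        "attributes": {"name": "ntp.example.com", "preferred": "true"},
        "children": [{"datetimeRsNtpProvToEpg": {
            "attributes": {"tDn": "uni/tn-mgmt/mgmtp-default/oob-default"}}}],
    }
}
s.post(APIC + "/api/mo/uni/fabric/time-default.json", json=ntp)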

You are correct about traffic being iffy when you have both in-band and OOB. I had 2 recent deployments with this issue, and TAC advised using OOB only.  I was not actively involved, so I do not know the root cause. It's supposed to work.  I am very interested to know the root cause.
