There are many ways to provide resiliency, and teaming (also called bonding in the Linux world) is one of the more common. Within teaming there are many configuration options, but as for what works best: if the server connects to two separate switches, an Active/Standby form of teaming is the norm (in my experience). You can do Active/Active in this case, but it relies on proprietary algorithms that may have unexpected interactions with the network, may load balance less than optimally, and results in a solution that is more difficult to troubleshoot. With proprietary Active/Active teaming, under certain conditions, if you lose one side of the network the teaming may not sense it, and because you have no control over which session uses which NIC in the team at any given time, the result often looks like an intermittent issue (which makes it harder to troubleshoot).
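On Linux, the Active/Standby style described above corresponds to the bonding driver's active-backup mode. A minimal sketch using iproute2 follows; the interface names (eth0/eth1), bond name, and IP address are illustrative assumptions, and your distribution may prefer to express this through its own network configuration tooling instead:

```shell
# Create an active-backup bond; miimon 100 polls link state every 100 ms
# so the standby NIC takes over when the active link fails.
ip link add bond0 type bond mode active-backup miimon 100

# Enslave both NICs (they must be down before joining the bond).
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0

# Bring up the bond and assign the server's address to it.
ip link set bond0 up
ip addr add 192.0.2.10/24 dev bond0

# Inspect which slave is currently active.
cat /proc/net/bonding/bond0
```

Because active-backup sends traffic out only one NIC at a time, it requires no special configuration on either upstream switch, which is why it is the usual choice when the two NICs land on separate switches.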
If you connect both NICs to a single/common upstream switch, you can take advantage of IEEE 802.3ad link aggregation teaming (or sometimes static EtherChannel, depending on the teaming software). This provides an industry-standard form of Active/Active teaming and is not subject to the same issues as proprietary A/A. Of course, you now have to configure the upstream switch to support 802.3ad (proprietary A/A does not require any special switch configuration), and the single upstream switch becomes a single point of failure.
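The Linux equivalent of 802.3ad teaming is the bonding driver's 802.3ad (LACP) mode. A sketch, again with assumed interface names and address; the upstream switch must have a matching LACP port-channel configured on the two ports, or the aggregator will not form:

```shell
# Create an 802.3ad (LACP) bond. xmit_hash_policy layer3+4 spreads flows
# across members by IP and port; the hash policy choice is an assumption.
ip link add bond0 type bond mode 802.3ad miimon 100 xmit_hash_policy layer3+4

# Enslave both NICs connected to the same upstream switch.
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0

ip link set bond0 up
ip addr add 192.0.2.10/24 dev bond0

# Verify the LACP aggregator formed and both links are in it.
cat /proc/net/bonding/bond0
```

Note that per-flow hashing means a single TCP session still uses only one member link; the aggregate benefits many concurrent flows rather than one large one.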
Using an upstream switch with redundant supervisors and power supplies, and spreading the NIC connections across different line cards, can help mitigate the single-point-of-failure issue with EtherChannel-based teaming.
Another option is to connect the NICs to separate switches that look and feel like a single logical switch (such as a pair of 3750Es stacked together, or a pair of 6500s running in VSS mode). In these cases, you can still use 802.3ad teaming while connecting the NICs to physically separate switches, removing the single point of failure altogether.
Beyond teaming, there are other methods to provide server redundancy, such as strategically placed load balancers that distribute requests to the servers and, if a server goes offline, redirect those requests to the remaining servers. Along the same lines, there are also many forms of clustering that provide server HA, none of which requires NIC teaming.
On a more primitive level, there are also devices that let you take a single NIC and plug it into a special device that then connects to two separate switches. If the connection to one of the switches goes down, the device switches over to the other, still-operational connection. Naturally this device, along with the NIC, constitutes a single point of failure, but the network itself at least would be redundant (I have not seen one of these lately, but I'm sure they're still out there).
If rapid failover is not necessary, and you don't want to deal with the complexities of teaming or other solutions, some users will simply install two NICs but leave one shut down; when the active connection goes down, they manually go into the OS and bring up the other NIC, using the same IP address.
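On Linux, that manual failover amounts to moving the address from the dead NIC to the spare one. A sketch with assumed interface names and address (the `arping` flags shown are from the iputils implementation; other arping variants differ):

```shell
# eth0 has failed; move its address to the cold-standby NIC eth1.
ip addr del 192.0.2.10/24 dev eth0
ip link set eth0 down

ip link set eth1 up
ip addr add 192.0.2.10/24 dev eth1

# Send gratuitous ARP so neighbors and the switch update their
# ARP/MAC tables for the address's new location.
arping -U -I eth1 -c 3 192.0.2.10
```

The gratuitous ARP step matters: without it, traffic may keep flowing toward the old NIC's MAC address until the neighbors' ARP caches time out.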
So there are a lot of options, and the above are just some examples; choosing one over another requires knowing what is available and what the applications running on the servers actually require.