This document defines UCS-related terms and technologies commonly encountered in UCS deployments.
N-Port Virtualization Mode
Server virtualization uses virtual machine technology to prevent proliferation of physical servers in the data center. To be managed as a unique entity on the storage area network, each virtual server requires a separate address on the fabric. The N-Port virtualization feature supports independent management and increased scale of virtual machines in the data center.
N-Port ID Virtualization (NPIV) provides a means to assign multiple FCIDs to a single N_Port. It allows multiple applications to share the same Fibre Channel adapter port. Using a different pWWN for each application allows access control, zoning, and port security to be implemented at the application level.
N-Port Virtualizer (NPV) utilizes NPIV functionality to allow a "switch" to act like a server performing multiple logins through a single physical link.
Uplink Ethernet Ports
The enabled uplink Ethernet ports in the UCS 6100 Series switch are used to forward traffic to the next network layer.
Each UCS 6100 Series switch has four Ethernet uplinks to the pair of Nexus 7000 switches. Each uplink (whether a single Ethernet interface or bundled as a port-channel interface) is by default configured as an 802.1Q trunk; the native VLAN for the trunk is 1.
The Fabric Interconnect uses the following port-channel load-balancing hashing algorithm, which is not configurable:
·Layer 2 packets - src-dst-mac
·Layer 3 packets - src-dst-ip
The port-channel load-balancing hashing algorithm on the Fabric Interconnect affects which member link the Fabric Interconnect picks to send outgoing frames/packets on the port-channel interface; it does not affect which member link incoming frames/packets arrive on. The port-channel hashing algorithm configured on the Nexus 7000 determines the incoming member link.
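The hashing behavior above can be sketched conceptually as follows. This is not the Fabric Interconnect's actual (non-configurable, hardware-internal) hash; the XOR fold is only an assumption used to illustrate the key property, namely that a given src/dst pair always maps to the same member link, so a single flow never sprays across links.

```python
def pick_member_link(src: str, dst: str, num_links: int) -> int:
    """Map a src/dst address pair (MAC or IP string) to a member link index.

    Illustrative only: the real Fabric Interconnect hash is internal to
    the hardware and not configurable.
    """
    h = 0
    for byte in (src + dst).encode():   # fold both addresses into one value
        h ^= byte
    return h % num_links                # select one of the bundled links

# Frames of the same src/dst pair always leave on the same member link:
link = pick_member_link("0025.b500.0001", "0025.b500.00ff", num_links=4)
```

The same idea applies whether the keys are MAC addresses (Layer 2 packets) or IP addresses (Layer 3 packets); only the fields fed into the hash change.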
Uplink FC Ports
A UCS Expansion Module with four 4-Gb Fibre Channel (FC) ports is used in the UCS 6100 Series for accessing the SAN.
By default, all FC ports are enabled when the UCS 6100 Series is powered on. There is no inherent HA designed into the UCS for FC. We must have a dual redundant fabric design and multipathing drivers (VMware) to enable HA in the SAN design.
The UCS uplink FC ports run in NPV and NPIV modes only. They must always be attached to an NPIV-enabled northbound fabric switch; for example, an MDS switch must have the NPIV feature enabled, otherwise the link will not come up. There is no support for directly attached storage (DAS).
Each Cisco Unified Computing System (UCS) 6100 Series switch has an open slot for expansion modules that add Fibre Channel ports for SAN connectivity. These ports and their attributes are enabled through the SAN scope of the UCS Manager.
The storage array has active and passive interfaces and also supports the concept of trespass, which enables path failures to be handled seamlessly and provides the same advantages as multipathing and failover.
These arrays have one active path at a time; the other paths are passive.
UUID
The UUID is a 128-bit number (32 hex digits, in 16 groups of 2 hex digits) that identifies the server itself. It is not the same as the serial number of the server, and it is often associated with the motherboard of the server. The identifier has a prefix component and a suffix component. There are no rules or restrictions for this string other than avoiding duplication, which must be checked against the CMDB. You can enter the hexadecimal letters A-F and the numbers 0-9 for either the prefix or the suffix.
The final UUID string is a combination of the prefix and suffix. A suggested method is to use a company naming convention that possibly reflects geography and roles of the blade.
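The prefix/suffix combination can be sketched as follows. This is a minimal illustration, assuming a prefix of the form XXXXXXXX-XXXX-XXXX and a suffix of the form XXXX-XXXXXXXXXXXX (X being a hex digit); the example values themselves are made up, not real identifiers.

```python
import re

# Assumed formats (see lead-in): together the prefix and suffix yield
# the 32 hex digits of the full UUID.
PREFIX_RE = re.compile(r"^[0-9A-F]{8}-[0-9A-F]{4}-[0-9A-F]{4}$")
SUFFIX_RE = re.compile(r"^[0-9A-F]{4}-[0-9A-F]{12}$")

def make_uuid(prefix: str, suffix: str) -> str:
    """Combine a prefix and suffix into a full UUID, rejecting bad input."""
    if not PREFIX_RE.match(prefix):
        raise ValueError("invalid UUID prefix: %r" % prefix)
    if not SUFFIX_RE.match(suffix):
        raise ValueError("invalid UUID suffix: %r" % suffix)
    return prefix + "-" + suffix

# Made-up example; a site convention might encode geography/role here:
uuid = make_uuid("ABCDEF01-0000-0000", "0001-000000000001")
```

Note that only duplication is prohibited, so the validation above checks format, not any particular naming scheme.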
It is not recommended to use hardware UUIDs for servers; use UUID pools instead. This lends itself better to stateless computing and service profile migration. You can also use a UUID suffix pool: Cisco UCS Manager automatically generates a unique prefix so that you are guaranteed a unique UUID for each logical server.
MAC Addresses
This is the well-known hardware address of the NIC on the system.
The recommendation is to create two different pools of MAC addresses one for each fabric interconnect (A and B). These pools can later be used to feed the two vNIC templates that one could create for each fabric.
Cisco pre-populates a critical part of the OUI, which is registered to Cisco. It is recommended to use a convention that makes the fabric delineation obvious (A, B).
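One such convention can be sketched as below. The Cisco OUI 00:25:B5 is real, but the AA/BB fabric-marker bytes and the pool size are an illustrative naming choice, not a Cisco mandate.

```python
def mac_pool(fabric: str, size: int) -> list:
    """Generate a MAC pool whose fourth byte encodes the fabric (A or B).

    The AA/BB markers are an assumed convention chosen so the fabric is
    obvious at a glance; only the 00:25:B5 OUI is fixed by Cisco.
    """
    marker = {"A": 0xAA, "B": 0xBB}[fabric]
    return ["00:25:B5:%02X:00:%02X" % (marker, i) for i in range(size)]

pool_a = mac_pool("A", 4)   # 00:25:B5:AA:00:00 ... 00:25:B5:AA:00:03
pool_b = mac_pool("B", 4)   # 00:25:B5:BB:00:00 ... 00:25:B5:BB:00:03
```

These two pools would then feed the two per-fabric vNIC templates mentioned above.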
WWNN and WWPN
The node name uniquely identifies a communicating object in the Fibre Channel fabric. It is the parent object for the end ports that send and receive FC frames. UCS assigns a unique WWNN to the Converged Network Adapter (CNA) itself. The best practice for both WWNN and WWPN is to keep the first octet as "20" or "21", thus properly identifying the host as an initiator in the FC SAN.
Each port of the CNA is assigned a WWPN to allow its unique identification on the fabric. UCS CNAs are dual-ported, so it is recommended to create two pools of WWPNs, one for each fabric. This makes it easy to identify which WWPN is engineered to which fabric in steady-state operations.
A WWNN pool is one of two pools used by the Fibre Channel vHBAs in the UCS; you create separate pools for the WW node names assigned to servers and the WW port names assigned to vHBAs. The purpose of this pool is to assign World Wide Node Names (WWNNs) to servers. If a pool of WWNNs is included in a service profile, the associated server is assigned a WWNN from that pool.
The use of the Cisco OUI (00:25:B5) is needed because without it, the MDS switch rejects the WWNN/WWPN. In UCS release 1.0(2d) and above, the Cisco OUI has been pre-populated to help alleviate such errors.
A WWPN pool is the second type of pool used by the Fibre Channel vHBAs in the UCS. The purpose of this pool is to assign World Wide Port Names (WWPNs) to the vHBAs. If a pool of WWPNs is included in a service profile, the associated server is assigned a WWPN from that pool.
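A per-fabric WWPN convention following the best practices above can be sketched like this. The leading "20:00" (initiator) and the Cisco OUI 00:25:B5 come from the text; the 0A/0B fabric bytes are an assumed illustrative convention.

```python
def wwpn_pool(fabric: str, size: int) -> list:
    """Generate an 8-byte WWPN pool with an assumed per-fabric marker byte.

    "20:00" keeps the first octet at 20 (initiator best practice) and
    00:25:B5 is the Cisco OUI; the 0A/0B marker is illustrative only.
    """
    marker = {"A": "0A", "B": "0B"}[fabric]
    return ["20:00:00:25:B5:%s:00:%02X" % (marker, i) for i in range(size)]

fabric_a = wwpn_pool("A", 2)   # 20:00:00:25:B5:0A:00:00, ...:0A:00:01
fabric_b = wwpn_pool("B", 2)   # 20:00:00:25:B5:0B:00:00, ...:0B:00:01
```

With this layout, a glance at the sixth byte tells an operator which fabric a given WWPN was engineered for.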
Boot order and boot policies are configurable from the policy scope of the Servers tab. This policy determines the following:
·Boot device configuration
·Location from which the server boots
·The order in which boot devices are invoked
Each UCS service profile contains two host bus adapter (HBA) interfaces, which have primary and secondary bootable SAN groups. Two different port groups are defined in each group. This resilient configuration guards against path failures.
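The resilient layout above can be modeled roughly as follows. All vHBA names and target WWPNs here are hypothetical, invented purely to show the primary/secondary structure.

```python
# Hypothetical data only: the fc0/fc1 names and the target WWPNs are
# made up for illustration, not taken from a real storage array.
boot_policy = {
    "order": ["virtual-media", "san"],     # boot device invocation order
    "san": {
        "fc0": {"primary":   "50:06:01:60:41:E0:00:01",
                "secondary": "50:06:01:68:41:E0:00:01"},
        "fc1": {"primary":   "50:06:01:61:41:E0:00:01",
                "secondary": "50:06:01:69:41:E0:00:01"},
    },
}

# Two HBAs x two target groups = four paths; any single path can fail
# and a bootable path remains.
paths = [(hba, role) for hba, targets in boot_policy["san"].items()
         for role in targets]
```

The point of the structure is visible in `paths`: the server only becomes unbootable if all four entries fail simultaneously.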
The management policy groups all the ways to get remote access to the blade. It includes IPMI, SOL, and the KVM console.
For the purpose of the pilot, we will use only KVM console and SOL access. A pool of 11 IP addresses will be created to allow these accesses. These addresses must be in the same VLAN as the management interface (mgmt0) of the fabric interconnect.
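Carving that pool out of the mgmt0 subnet can be sketched with the standard library. The 192.168.10.0/24 subnet and the .20-.30 range are made-up examples, not addresses from the pilot.

```python
import ipaddress

# Assumed example subnet: in practice this must be the subnet that the
# fabric interconnect's mgmt0 interface lives in.
mgmt_subnet = ipaddress.ip_network("192.168.10.0/24")

hosts = list(mgmt_subnet.hosts())                 # .1 through .254
kvm_sol_pool = [str(ip) for ip in hosts[19:30]]   # eleven addresses: .20-.30
```

Because the pool is drawn from the same network object as mgmt0, every address is guaranteed to sit in the same VLAN/subnet as the management interface.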
SOL access also requires a tty running at the OS level. The ESXi hypervisor should have this tty enabled by default.
Network Control Policy
The UCS network control policy configures the network control settings for the Cisco UCS instance, including the following:
·Whether the Cisco Discovery Protocol (CDP) is enabled or disabled
·How the VIF behaves if no uplink port is available in end-host mode
·Whether the server can use different MAC addresses when sending packets to the fabric interconnect
HCS uses the network control policy to enable CDP. The CDP policy is added to the service template and applied to each vNIC on every host, so that the hosts can be viewed via CDP on the Nexus 1000V (N1Kv).
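The three settings above can be modeled as a small record, as sketched below. The field names and default values are illustrative placeholders, not UCS Manager API names.

```python
from dataclasses import dataclass

# Hypothetical model of the network control policy settings; all names
# and defaults here are assumptions for illustration.
@dataclass
class NetworkControlPolicy:
    cdp_enabled: bool = False
    uplink_fail_action: str = "link-down"  # VIF behavior with no uplink (end-host mode)
    mac_security: str = "allow"            # may the server send with other source MACs?

# HCS enables CDP so the hosts become visible on the N1Kv:
hcs_policy = NetworkControlPolicy(cdp_enabled=True)
```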