I'm looking at a fresh way to configure a UCS host running vSphere 5.1/5.5 with Enterprise Plus licensing and a distributed virtual switch (DVS), combining VMware NIOC with UCS QoS tags. Starting with vSphere 5.0, the DVS can tag traffic with QoS (CoS) markings, like the Nexus 1000V. My understanding is that VCE normally presents only the two pNICs to the DVS (instead of carving the pNICs up into multiple vNICs), then uses the DVS for bandwidth control. I don't know whether they normally use QoS marking, but I would imagine so.
Here's the QoS System Class config:
Platinum: disabled, CoS 5
Gold: enabled, CoS 4, weight 30%
Silver: enabled, CoS 2, weight 10% (best effort)
Bronze: enabled, CoS 1, weight 10% (best effort)
Best Effort: enabled, weight 20%
FC: CoS 3, weight 30%
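A quick way to sanity-check the class weights above is to confirm the enabled classes sum to 100% and translate each weight into the minimum bandwidth it guarantees under congestion. A rough sketch, using the percentages from the config above and assuming a 20Gb link per blade (class names and the 20Gb figure are from this thread; the script itself is just illustrative arithmetic, not a UCS API):

```python
# Sketch: minimum-bandwidth guarantees implied by the UCS QoS System
# Class weights above, on a 20 Gb link per blade. Platinum is disabled,
# so it carries no weight here.
weights = {
    "Gold (CoS 4)": 30,
    "Silver (CoS 2)": 10,
    "Bronze (CoS 1)": 10,
    "Best Effort": 20,
    "FC (CoS 3)": 30,
}

link_gbps = 20
total = sum(weights.values())
assert total == 100, "enabled class weights should sum to 100%"

for cls, pct in weights.items():
    # The guarantee only matters under congestion; idle bandwidth is
    # shared by whoever needs it.
    print(f"{cls}: {pct}% -> {link_gbps * pct / 100:.0f} Gb minimum")
```

Keep in mind these are minimums, not caps: FC gets at least 6Gb when the link is saturated, but any class can burst above its weight when the link is idle.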
We have no real-time/VoIP/video/stock-trading type traffic. FC storage is used for all server VMs; NFS is used for VDI. On the QoS policy I set Host Control to Full (so UCS respects the DVS QoS tags) and Priority to Best Effort.
What I'm trying to figure out is the best mapping of the vCenter NIOC pools and their shares to the proper UCS CoS values. The NIOC pools are:
Fault Tolerance (not used), 10 shares
iSCSI (not used), 20 shares
ESX Management, 5 shares
NFS (used for VDI datastores), 20 shares
vMotion, 20 shares
vSphere Replication (not used), 5 shares
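One thing worth remembering about the share values above: NIOC shares are relative weights, evaluated per physical uplink, and only when that uplink is saturated. Since the unused pools carry no traffic, in practice only the active pools' shares compete. A small sketch of that arithmetic (pool names and share values are from the list above; the "active pools" assumption is mine):

```python
# Sketch: NIOC shares only matter relative to the other pools actually
# sending traffic on a congested uplink. Unused pools carry no traffic,
# so they drop out of the calculation.
all_shares = {
    "Fault Tolerance": 10,     # not used
    "iSCSI": 20,               # not used
    "ESX Management": 5,
    "NFS": 20,                 # VDI datastores
    "vMotion": 20,
    "vSphere Replication": 5,  # not used
}

# Assumption: only these three pools are pushing traffic during congestion.
active = {"ESX Management": 5, "NFS": 20, "vMotion": 20}
total = sum(active.values())

for pool, s in active.items():
    print(f"{pool}: {s} shares -> {100 * s / total:.0f}% of a congested uplink")
```

So if management, NFS, and vMotion all contend on one uplink, NFS and vMotion each get roughly 44% and management about 11%; VM traffic (not listed above) would shift those numbers further if it shares the uplink.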
So far I have:
IP storage (iSCSI/NFS) mapped to Gold (CoS 4)
Management/Replication mapped to Bronze (CoS 1)
vMotion mapped to Silver (CoS 2)
So I'm wondering about VM traffic. If I don't have the DVS mark it, will it fall into the UCS Best Effort class and thus get the 20% minimum bandwidth? I don't see us using FT soon, but would it make sense to give it Platinum (CoS 5) priority with a small weight, like 10%? That way UCS and the DVS are already configured for FT should someone decide we need it down the road.
Or should I make other adjustments to the QoS/priority mappings? Open to any real-world input on what people are doing.
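The mapping described above can be summarized as a simple lookup, with anything untagged falling through to the UCS Best Effort class. A sketch (the Fault Tolerance entry reflects the proposed Platinum mapping, not the current config, and the helper function is hypothetical):

```python
# Sketch of the NIOC-pool -> CoS mapping described above.
# Traffic the DVS does not tag (e.g. plain VM traffic) carries no CoS
# and lands in the UCS Best Effort class.
cos_map = {
    "NFS":                 4,  # Gold
    "iSCSI":               4,  # Gold (pool unused, mapped for completeness)
    "ESX Management":      1,  # Bronze
    "vSphere Replication": 1,  # Bronze (pool unused)
    "vMotion":             2,  # Silver
    "Fault Tolerance":     5,  # proposed: Platinum, if FT is enabled later
}

def ucs_class(pool):
    # Hypothetical helper: no mapping -> no DVS tag -> UCS Best Effort,
    # which has the 20% minimum under congestion in the config above.
    return cos_map.get(pool, "Best Effort (untagged)")
```

Laid out this way, the open question is just whether VM traffic stays untagged (Best Effort, 20% minimum) or gets its own CoS and weight.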
Just using the VIC1240 with the 2204XP, so only 20Gb per blade. Since this post I've modified the DVS shares to set all traffic to Normal (50) and let UCS handle the traffic shaping. That seemed better than trying to shape traffic in both the DVS and UCS.
The DVS is configured with two uplinks per host. Route Based on Physical NIC Load is used for each port group, and both NICs are active uplinks. The only exceptions are the two vMotion portgroups, where we use active/standby and standby/active to enable multi-NIC vMotion.
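The teaming layout above can be sketched as a table of per-portgroup uplink ordering (portgroup and uplink names below are placeholders, not the real object names):

```python
# Sketch of the per-portgroup uplink ordering described above: all
# portgroups use both uplinks active with load-based teaming, except
# the two vMotion portgroups, which mirror each other active/standby
# so each vMotion vmknic normally rides its own pNIC.
teaming = {
    "VM-and-other-portgroups": {"active": ["Uplink1", "Uplink2"], "standby": []},
    "vMotion-A":               {"active": ["Uplink1"], "standby": ["Uplink2"]},
    "vMotion-B":               {"active": ["Uplink2"], "standby": ["Uplink1"]},
}

# If a pNIC fails, the affected vMotion portgroup fails over to the
# surviving uplink, so both vMotion vmknics temporarily share one pNIC.
```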
The design is in production and seems to be working fine. UCS respects the CoS tagging and puts the packets in the correct QoS group.