We are migrating from an old infrastructure composed of two 6500 core switches with 3570 access switches and a VLAN for each department. We have already prepared the new infrastructure, which includes two Nexus 7K core switches and 9560G access switches. For testing purposes, we connected one Nexus 7K to one 6500 and created the DHCP scopes on our Microsoft DHCP server for one VLAN on the new core. All routing requirements have been configured between the new and old cores, and I can reach the DHCP server from the access switch on the new Nexus 7K network. When I connect my laptop to the new access switch, I get an IP address from our DHCP server. On our network we use Microsoft TMG as a proxy, and we push the address of this proxy as a DHCP option on our DHCP server.
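For reference, the relay setup on the Nexus side looks roughly like the sketch below. The VLAN number and all addresses are placeholders, not our real ones:

```
feature dhcp
ip dhcp relay
!
interface Vlan100
  ip address 10.1.100.1/24
  ip dhcp relay address 10.1.1.50
```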
When I connect to the new Nexus network, I get an IP address from the DHCP server through the IP helper on the Nexus, but I cannot get any information about the proxy or the Microsoft WPAD file location. I installed Wireshark and captured DHCP traffic on the old and new networks, and this is what I found:
On the old network:
The laptop gets its IP address information from the DHCP server through the IP helper on the 6500 switches via the DHCP Discover, Offer, Request, and ACK broadcast messages.
The laptop then sends a broadcast DHCP Inform message asking for the additional DHCP information, such as the private proxy settings.
The IP helper on the 6500 switches contacts our DHCP server, gets this information, and sends it back to my laptop as a unicast DHCP ACK message.
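To make the Inform/ACK exchange concrete, here is a minimal stdlib-only sketch of how the extra settings are encoded: DHCP options are a TLV list, and the WPAD/proxy auto-config URL is conventionally carried in option 252. Option numbers follow RFC 2132; the sample bytes and URL below are fabricated for illustration:

```python
def parse_dhcp_options(data: bytes) -> dict:
    """Walk the TLV-encoded options area that follows the DHCP magic cookie."""
    opts = {}
    i = 0
    while i < len(data):
        code = data[i]
        if code == 255:        # 255 = End option, stop parsing
            break
        if code == 0:          # 0 = Pad, single byte, no length field
            i += 1
            continue
        length = data[i + 1]
        opts[code] = data[i + 2:i + 2 + length]
        i += 2 + length
    return opts

# Fabricated ACK options: message type (53) = 5 (DHCPACK), plus option 252
wpad_url = b"http://proxy.example.local/wpad.dat"
sample = bytes([53, 1, 5, 252, len(wpad_url)]) + wpad_url + bytes([255])

opts = parse_dhcp_options(sample)
print(opts[252].decode())  # the WPAD URL the client should receive
```

This is exactly the payload that arrives intact in the old network's unicast ACK but never reaches the client on the new one.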
On the new network:
The first two steps happen as on the old network.
The IP helper contacts our DHCP server and gets the additional DHCP options, but at this point it sends the DHCP ACK with a source address of the IP helper VLAN IP address and a destination IP address of 0.0.0.0! As a result, I never get the DHCP information about the proxy settings and WPAD location.
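For comparison, this sketch shows how RFC 2131 says the DHCPACK destination should be chosen, which is why 0.0.0.0 is never deliverable: for a DHCPINFORM the ACK is unicast straight to the client's ciaddr (RFC 2131 §4.3.5), and otherwise the usual §4.1 rules apply. The client address used here is an assumption, not from the capture:

```python
ZERO = "0.0.0.0"

def ack_destination(inform: bool, ciaddr: str, giaddr: str,
                    broadcast_flag: bool, yiaddr: str) -> str:
    """Pick the DHCPACK destination per RFC 2131."""
    if inform:
        return ciaddr                  # unicast straight to the client
    if giaddr != ZERO:
        return giaddr                  # hand the reply to the relay agent
    if broadcast_flag:
        return "255.255.255.255"       # client asked for a broadcast reply
    return yiaddr                      # unicast to the offered address

# For the Inform above, the client already holds an address
# (10.1.100.25 assumed), so the ACK should target it, never 0.0.0.0:
print(ack_destination(True, "10.1.100.25", "10.1.100.1", False, ZERO))
```

In other words, whatever the relay on the Nexus is doing with the Inform's ACK, a destination of 0.0.0.0 means no host on the VLAN will ever accept it.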