Isolated PVLAN w/Promiscuous North

utahbmxer
Level 1

Hi

I am super frustrated with our FlexPod and Cisco at the moment, so I am reaching out in the hope that an expert can spot the issue. I have had a TAC case open with them for almost two months now, and it seems no one knows much about this stuff, but they tell me it works.

UCS 3.1(1h)

B200 blades running VMware ESXi 5.5 U3. I added two new vNICs for this PVLAN setup, one on each fabric. I am trying to create an isolated PVLAN that I can attach a distributed port group to, so VMs can have a second NIC in the isolated PVLAN to talk to a backup server.
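For reference, the PVLAN pair on the UCSM side was created along these lines (a sketch only; VLAN IDs 100/101 and the names are placeholders, not the exact config):

  UCS-A# scope eth-uplink
  UCS-A /eth-uplink # create vlan backup-primary 100
  UCS-A /eth-uplink/vlan # set sharing primary
  UCS-A /eth-uplink/vlan # commit-buffer
  UCS-A /eth-uplink/vlan # exit
  UCS-A /eth-uplink # create vlan backup-isolated 101
  UCS-A /eth-uplink/vlan # set sharing isolated
  UCS-A /eth-uplink/vlan # set pubnwname backup-primary
  UCS-A /eth-uplink/vlan # commit-buffer

The isolated VLAN (101 here) is the one assigned to the two new vNICs.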

The backup server is a UCS-managed C240 rack server with an additional Intel X520-2 10Gb NIC, connected to the Nexus 5548 switches as a promiscuous port, north of the UCS 6296 FIs.
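The 5548 side of the promiscuous leg is essentially this (again a sketch; the interface and VLAN IDs are placeholders):

  feature private-vlan

  vlan 100
    private-vlan primary
    private-vlan association 101
  vlan 101
    private-vlan isolated

  interface Ethernet1/10
    description Backup server Intel X520 (placeholder port)
    switchport mode private-vlan promiscuous
    switchport private-vlan mapping 100 101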

It almost works when I don't do any tagging on the VMware distributed switch (the isolated VLAN is native on the vNIC), but as soon as two VMs are on the same host they can talk to each other, which can't happen, since they will belong to different customers once this is working. I assume the VMware kernel is switching the traffic locally rather than sending it to the wire.

I have attached a diagram showing the topology along with a few parts of the configs. Any ideas would be appreciated, and thanks in advance!

4 Replies

Walter Dey
VIP Alumni

Hi

As far as I can see from your diagram:

...Backup server on UCS managed C240 rack server with additional Intel X520-2 10Gb NIC, connected to the Nexus 5548 switches as promiscuous, north of the UCS 6296 FIs....

This server is NOT UCS managed if it is connected to the N5K! You would have to connect it directly to the FI (or to a FEX connected to the FI).

Sorry, I did not do a good job explaining the connections. It has been UCS managed for two years. We recently added an Intel NIC (part# N2XX-AIPCI01, as recommended by Cisco TAC) to this server and connected that adapter to the 5548 switches so we could have a promiscuous port that sits north of the UCS domain, so to speak.

Right now, the C240 server is dual-homed. It has the vNICs (from the VIC) carrying management traffic, and we are now trying to use the new Intel NICs (teamed with Server 2012 R2 NIC Teaming) to attach it to the private VLAN.

I should mention our 6296 pair is running in end-host mode.
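For completeness, my understanding is that the N5K ports facing the FIs just need to be ordinary trunks carrying both PVLAN IDs; the private-VLAN behavior is applied only at the promiscuous port. Something like this (interface and VLAN IDs are placeholders):

  interface Ethernet1/1
    description Trunk to UCS 6296 FI-A (placeholder port)
    switchport mode trunk
    switchport trunk allowed vlan add 100-101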

The Intel X520 is a supported adapter for UCSM integration, but I'm not sure why Cisco TAC said you could dual-home it with a Nexus. Where it is listed as supported, the adapter must be connected to the FIs or FEXs. Your Cisco VIC supports both data and management traffic, whereas non-Cisco cards are data only. When using non-Cisco CNA cards, the 1Gb LOM ports from the C240 are connected to the FIs along with the 10GbE Intel X520.

 

I know that is not what you've done; you had the VIC originally, and TAC for some reason said you could dual-home it. I don't know why they stated this, because it's an unsupported setup. When integrated into UCSM, all connections must go to the FIs.

Agreed with the comment above that you should not have a UCSM-managed NIC card, on an integrated server, that is not connected to the FIs. However, I am unsure whether the Intel X520 is UCSM managed.

 

The original issue does not seem to be with the setup but, as you already identified, with VMware kernel switching, which appears to be allowing isolated PVLAN ports to communicate. When the VMs are on different hosts, the traffic goes to the upstream switch (from the DvS perspective) and, as you said, isolation works in that case.
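Two things may help narrow it down. On the physical side, you can confirm the PVLAN association and the promiscuous mapping on the 5548 with (placeholder interface):

  show vlan private-vlan
  show interface Ethernet1/10 switchport

On the virtual side, the DvS has its own private-VLAN table (dvSwitch settings > Private VLAN). If you define the same primary/isolated pair there and put the backup port group on the isolated secondary VLAN, instead of carrying it untagged, the ESXi host itself should enforce isolation between VMs on the same host rather than switching them locally.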
