
VMWare Integration

jaye15394
Level 1

My server team has deployed several servers for VMware use. There is one NIC dedicated to backup, one to VMotion, one to management, two to PRD traffic, and two to iSCSI (NetApp). The two PRD NICs and the two iSCSI NICs are each EtherChanneled to a 3750 switch.

All seems to be working fine except one "glitch."

When I unplug both PRD cables, I can no longer ping the VMotion or iSCSI NICs, which, IMO, makes no sense. I can ping them locally on the same subnet, but not across Layer 3. I've gone into the VMware management console and I have reason to believe the default gateway is screwed up, but I'm no VM expert so I'm not sure how complex the config is. Has anyone seen this? There are default gateways for the service console, VMkernel, etc.

Can someone break this down and help?

Much appreciated.

Thanks,

JayE

6 Replies

michaelchoo
Level 1

I'm assuming that each Ethernet connection to your VMware servers belongs to a different subnet here (which is the scenario I've come across when working on a large bank's network).

You need routing tables on your VMware servers, because your servers have multiple network connections. Assuming your servers' default gateway points to the PRD connection, when you ping your server's VMotion connection, the server will try to respond by sending the ICMP echo-reply to its default gateway through its PRD connection; since that connection is down, it discards the packet.

What you need to do is either create multiple default gateways (one for each VMware connection/subnet), or, if you have a specific out-of-band management subnet that your VMotion traffic originates from, simply create specific static routes for that subnet on the VMware server pointing out the VMotion connection.
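A rough sketch of what that looks like from the ESX service console (the subnets and gateway addresses below are placeholders, not values from your setup, and I'm assuming ESX 3.x):

# service console (Linux) static route toward a hypothetical 10.10.10.0/24 management subnet
route add -net 10.10.10.0 netmask 255.255.255.0 gw 10.1.1.254

# VMkernel default gateway (the one the VMotion/iSCSI vmkernel ports use)
esxcfg-route                # show the current VMkernel default gateway
esxcfg-route 10.2.1.254     # set it (placeholder address)

If I remember right, newer ESX builds also let you add per-subnet VMkernel static routes with "esxcfg-route -a", which is the cleaner fit for the VMotion case. Keep in mind the service console and the vmkernel keep separate routing, so they're two separate things to check.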

Thanks for the response. I'm going to describe the current networking setup within the VMware host, as shown in the VMware Infrastructure Client.

vSwitch0 - 2 Physical NIC adapters being used.

vmnic4

vmnic2

Both of these are aggregated and load-balanced using IP hash, and EtherChanneled on the Cisco switch side.

Service Console Port: vswif0: 10.1.20.172

VMKernel Port: labeled Vmotion-2: 10.1.20.173

Click on Properties:

Teaming is set up on the vSwitch.

I then open "Service Console." IP is 10.1.20.172, which is correct. Then it has a 'grayed' out default gateway of 10.3.0.1.

I then open "VMotion-2." IP is 10.1.20.173 and default gateway is 10.1.20.1, which is correct.

vSwitch1 - 2 Physical NIC adapters being used.

vmnic5

vmnic3

Both of these are aggregated and load-balanced using IP hash, and EtherChanneled on the Cisco switch side.

Service Console 2: vswif1: 10.3.0.101

VMKernel Port: labeled iSCSI: 10.3.0.100

Click on Properties:

Teaming is set up on the vSwitch.

I then open "Service Console 2." IP is 10.3.0.101, which is correct. Then it has a 'grayed' out default gateway of 10.3.0.1.

I then open "iSCSI." IP is 10.3.0.100 and default gateway is 10.1.20.1, which is correct.

vSwitch2:

Vmkernel Port:

Vmotion: 10.1.2.11

Properties: IP is 10.1.2.11; default gateway (grayed out) is 10.1.20.1.

So obviously the routing is screwed up because the default gateways do not make sense. I've tried changing them and I get errors.
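In case it helps, I believe the server team could confirm which gateways are actually in effect from the service console command line rather than the GUI (rough sketch, assuming ESX 3.x):

route -n           # service console routing table and its default gateway
esxcfg-route       # VMkernel default gateway (what VMotion/iSCSI actually use)
esxcfg-vswif -l    # service console interfaces (vswif0/vswif1)
esxcfg-vmknic -l   # vmkernel ports and their IPs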

What's the story behind service console IP addresses?

Please note I do not manage the VM environment.

niro
Level 1

It sounds to me like maybe you have the VMotion NIC associated with the wrong virtual switch. You have 7 NICs total plugged into the host, right? Are the two EtherChanneled prod-traffic ports configured as trunk ports to the server?

Make sure the VMotion NIC is bound to a virtual switch that's not part of that prod switch. Same with the iSCSI: make sure those NICs have a separate virtual switch.

Also, on those EtherChannel ports, make sure you use the same load-balancing method on the VMware server as you do on the switch.
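For example, if the vSwitch teaming policy is "Route based on IP hash", the 3750 side would normally be a static channel with src-dst-ip load balancing, roughly like this (interface and channel-group numbers here are made up):

! global setting - matches the IP-hash policy on the vSwitch
port-channel load-balance src-dst-ip
!
! hypothetical member ports for the PRD team
interface range GigabitEthernet1/0/1 - 2
 channel-group 1 mode on
!
! then check the bundle with: show etherchannel summary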

The EtherChannel config works as it should.

The only problem is when both PRD cables are unplugged.

When both iSCSI cables are unplugged, I can still get to the PRD and VMotion NICs.

I have had issues similar to this and the thing that cured it was the channel protocol.

Have you set the port channel to LACP? PAgP (the Cisco default, and proprietary) causes all kinds of weird things when used with these NetApp servers, as they can only use LACP effectively.

Worth throwing that in there.
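If you want to rule it out, the Cisco-side change is small (port and channel-group numbers below are just examples):

! "active" runs LACP; the Cisco default PAgP would be "desirable"/"auto"
interface range GigabitEthernet1/0/3 - 4
 channel-group 2 mode active

The NetApp end's aggregation config would need to match so LACP is running on both sides.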

Andy

Thanks. Right now the EtherChannel works fine in terms of failover between links. The problem is that when both links in one EtherChannel are unplugged, all other links go down (at Layer 3). The problem is definitely the default gateways. I'm not too familiar with VMware, so I can't comment much on the VMkernel default gateway, NIC gateways, etc. The server guys are saying it's not designed that way because there are 4 virtual switches being used. From a networking 101 standpoint, I'd really be surprised that unplugging a NIC on one subnet could bring down another.

Can anyone help? Maybe even offline to walk me through the VI Client?

Thanks,

Jason
