MTU 1500 shows in vCenter but service profile template is at 9000

Dragomir
Level 1

I have two iSCSI NICs configured on my service profile and they are set to MTU 9000,

but in vCenter the NICs are still showing an MTU of 1500.

 

Any ideas?


Manuel Velasco
Cisco Employee

Hi Tony,

 

Did you set the MTU size to 9000 on the vSwitch in vCenter? (see below)
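
For a standard vSwitch, the MTU can also be checked and set from the ESXi shell (a sketch assuming ESXi 5.x and a vSwitch named vSwitch1; substitute your own vSwitch name):

esxcli network vswitch standard list
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000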

 

 

 

yes

Where in UCS did you set the MTU size?

 

Also, if you SSH to one of the ESXi hosts and run the following command, what do you see?

 

esxcfg-vmknic -l
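
If the MTU column in that output shows 1500, the vmkernel interface can be raised to 9000 from the same shell (a sketch assuming ESXi 5.x and that the iSCSI vmkernel port is vmk1; adjust the interface name to your environment):

esxcli network ip interface list
esxcli network ip interface set --interface-name=vmk1 --mtu=9000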

 

You can also check the MTU size on the UCS to verify it is set to 9000 (see screenshot).
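
The same thing can be checked from the FI CLI as well (a sketch; ethernet 1/1 is a placeholder port and the output details vary by UCS/NX-OS release, but the per-class MTU shown comes from the QoS System Class, so the class used by your iSCSI vNICs should report 9216):

UCS-A# connect nxos
UCS-A(nxos)# show queuing interface ethernet 1/1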

The MTU was set as part of the vNIC template for those iSCSI NICs.

 

 

esxcfg-vmknic -l on those iSCSI NICs is showing 9000 MTU,

and vmkping -s 9000 to my iSCSI storage appliance replies fine.

In CDP, vCenter is showing 1500 MTU for the iSCSI NIC's port on the fabric interconnect:

Port ID: Vethernet2737

 

 

On the FI:

 

UCSA-A(nxos)# sh run interface vethernet 2737

!Command: show running-config interface Vethernet2737
!Time: Mon Mar 17 15:29:04 2014

version 5.0(3)N2(2.11a)

interface Vethernet2737
  description server 1/2, VNIC vmnic6_iscsi_A
  switchport mode trunk
  untagged cos 5
  no pinning server sticky
  pinning server pinning-failure link-down
  switchport trunk native vlan 60
  switchport trunk allowed vlan 60
  bind interface port-channel1298 channel 2737
  service-policy type queuing input org-root/ep-qos-xxxxxxxxxxxxxxxxxxxxx
  no shutdown

Hey Tony,

 

Looks like the issue is just cosmetic on the vCenter side, but to make sure we are not fragmenting, try adding -d (do not fragment) to the vmkping command, see below:

 

vmkping -d -s 9000

/var/log # vmkping -d -s 9000 10.60.1.106
PING 10.60.1.106 (10.60.1.106): 9000 data bytes
sendto() failed (Message too long)
sendto() failed (Message too long)
sendto() failed (Message too long)

Can you also try vmkping -d -s 8000 10.60.1.106?
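
It can also help to step the size up from a value that should pass at the default 1500 MTU, to see where it starts failing (a sketch reusing the same storage IP; 1472 = 1500 minus 28 bytes of IP/ICMP overhead):

vmkping -d -s 1472 10.60.1.106
vmkping -d -s 4000 10.60.1.106
vmkping -d -s 8972 10.60.1.106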

I am getting the same error

 

sendto() failed (Message too long)
sendto() failed (Message too long)
sendto() failed (Message too long)

Looks like there is a configuration issue with jumbo frames somewhere along the path from your server to the storage array. Make sure you configure jumbo frames end to end. See this UCS configuration example: http://ucstech.blogspot.com/2011/06/qos-on-ucs.html

From my host, I am able to vmkping -s 9000 to the storage device and it responds fine,

but with -d it is failing.

What does that mean?

I also verified that the QoS policy on my iSCSI NICs is set to Platinum:

 

CoS 5

weight 10

mtu 9216

The vmkernel ping must be used with 8972, not 9000; 28 bytes is the protocol overhead of IP and ICMP (9000 - 28 = 8972). See e.g.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003728

http://www.cisco.com/en/US/docs/unified_computing/ucs/UCS_CVDs/ucs_vspex_250vm.html#wp700362
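
Applied to the storage target used earlier in this thread, that would be:

vmkping -d -s 8972 10.60.1.106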

Jumbo MTU Validation and Diagnostics
To validate the jumbo MTU end-to-end, SSH to the ESXi host. By default, SSH access to ESXi hosts is disabled; enable it by editing the host's security profile under the "Configuration" tab.
When connected to the ESXi host through SSH, initiate a ping to the NFS storage server with a large MTU size and set the "Do Not Fragment" bit of the IP packet to 1. Use the vmkping command as shown in the example:
Example 5
~ # vmkping -d -s 8972 10.10.40.64
PING 10.10.40.64 (10.10.40.64): 8972 data bytes
8980 bytes from 10.10.40.64: icmp_seq=0 ttl=64 time=0.417 ms
8980 bytes from 10.10.40.64: icmp_seq=1 ttl=64 time=0.518 ms
8980 bytes from 10.10.40.64: icmp_seq=2 ttl=64 time=0.392 ms
--- 10.10.40.64 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.392/0.442/0.518 ms
~ #
Ensure that the packet size is 8972 to account for the various L2/L3 overheads. Also ping all other hosts' vMotion and NFS vmkernel interfaces. The pings must be successful. If a ping is not successful, verify that 9000 MTU is configured. Follow these steps to verify:
1.  Verify 9000 MTU on the NFS share IP address on the VNX5500 storage device(s).
2.  Make sure that a "jumbo-mtu" policy map is created on the Nexus 5000 series switches with the default class having MTU 9216, and that the "jumbo-mtu" policy is applied to the system classes on the ingress traffic (see the sketch after this list).
3.  Make sure that the traffic from the storage array to the Cisco UCS B200 M3 Blade Servers is marked properly.
4.  Make sure that MTU 9216 is set in the Cisco UCS Manager system class configuration, and that the QoS policy is correctly set in the port-profiles.
5.  Make sure that 9000 MTU is set for the vmkernel ports used for vMotion as well as for the storage access vNICs.
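
For reference, the "jumbo-mtu" policy map mentioned in step 2 is typically built along these lines on a Nexus 5000 (a sketch of the commonly documented network-qos configuration; verify it against your NX-OS release and any existing QoS policies before applying):

policy-map type network-qos jumbo-mtu
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo-mtu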

 

On the UCS I have a Best Effort system class at 9216.

Wouldn't that take care of it?
