Cisco Support Community

New Member

MTU 1500 shows in vCenter but service profile template is at 9000

I have 2 iSCSI NICs configured on my service profile and they are set to MTU 9000,

but in vCenter the NICs are still showing MTU 1500

 

any idea?

30 REPLIES
Cisco Employee


Hi Tony,

 

Did you set the MTU size to 9000 on the vCenter vSwitch?

 

 

 

New Member

yes

Cisco Employee


Where in UCS did you set the MTU size?

 

Also, if you SSH to one of the ESXi hosts and run the following command, what do you see?

 

esxcfg-vmknic -l

 

You can also check the MTU size on the UCS to verify it is set to 9000

New Member


The MTU was set as part of the vNIC template for those iSCSI NICs

 

 

esxcfg-vmknic -l on those iSCSI NICs is showing 9000 MTU

 

vmkping -s 9000 to my iscsi storage appliance replies fine

New Member


In CDP in vCenter, the iSCSI NIC on the fabric interconnect is showing as 1500 MTU

port id

vethernet2737

 

 

on the FI

 

UCSA-A(nxos)# sh run interface vethernet 2737

!Command: show running-config interface Vethernet2737
!Time: Mon Mar 17 15:29:04 2014

version 5.0(3)N2(2.11a)

interface Vethernet2737
  description server 1/2, VNIC vmnic6_iscsi_A
  switchport mode trunk
  untagged cos 5
  no pinning server sticky
  pinning server pinning-failure link-down
  switchport trunk native vlan 60
  switchport trunk allowed vlan 60
  bind interface port-channel1298 channel 2737
  service-policy type queuing input org-root/ep-qos-xxxxxxxxxxxxxxxxxxxxx
  no shutdown

Cisco Employee


Hey Tony,

 

Looks like the issue is just cosmetic on the vCenter side, but to make sure we are not fragmenting, try adding -d to the vmkping command; see below

 

vmkping -d -s 9000

New Member


/var/log # vmkping -d -s 9000 10.60.1.106
PING 10.60.1.106 (10.60.1.106): 9000 data bytes
sendto() failed (Message too long)
sendto() failed (Message too long)
sendto() failed (Message too long)

Cisco Employee


Can you also try vmkping -d -s 8000 10.60.1.106?

New Member


I am getting the same error

 

sendto() failed (Message too long)
sendto() failed (Message too long)
sendto() failed (Message too long)

Cisco Employee


Looks like there is a configuration issue with jumbo frames along the path from your server to the storage array. Make sure you configure jumbo frames end to end. See this UCS configuration example: http://ucstech.blogspot.com/2011/06/qos-on-ucs.html
New Member


From my host, I am able to vmkping -s 9000 to the storage device and it responds fine

 

but the -d is failing

 

what should that mean?
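For context: vmkping without -d clears the DF bit, so an oversized ping is simply fragmented at any hop with a smaller MTU and still gets a reply; with -d set, the oversized packet is dropped (or rejected locally with "Message too long") instead. A rough sketch of the fragment arithmetic (approximate; real IP fragmentation aligns offsets to 8 bytes):

```python
import math

# Without -d, a large ICMP payload is split into fragments that each
# fit the hop MTU, so the ping "succeeds" even if jumbo frames are
# broken somewhere on the path. With -d, fragmentation is forbidden.
def fragment_count(payload, mtu, ip_header=20, icmp_header=8):
    total = payload + icmp_header      # ICMP header rides in the first fragment's data
    per_fragment = mtu - ip_header     # every fragment carries its own IP header
    return math.ceil(total / per_fragment)

print(fragment_count(9000, 1500))  # 7 fragments over a standard 1500-MTU hop
print(fragment_count(8972, 9000))  # 1: fits a single jumbo frame exactly
print(fragment_count(9000, 9000))  # 2: -s 9000 does NOT fit a 9000-byte MTU
```

The last line is why `vmkping -d -s 9000` fails even on a correctly configured jumbo path: 9000 bytes of data plus 28 bytes of IP/ICMP headers exceeds a 9000-byte MTU.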

New Member


I also verified that the QoS policy on my iSCSI NICs is set to Platinum

 

CoS 5

weight 10

mtu 9216

VIP Red


The vmkernel ping must be used with 8972, not 9000; 28 bytes is the protocol overhead of the IP and ICMP headers. See e.g.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003728

http://www.cisco.com/en/US/docs/unified_computing/ucs/UCS_CVDs/ucs_vspex_250vm.html#wp700362

Jumbo MTU Validation and Diagnostics
To validate the jumbo MTU from end-to-end, SSH to the ESXi host. By default, SSH access is disabled to ESXi hosts. Enable SSH to ESXi host by editing hosts' security profile under "Configuration" tab.
When connected to the ESXi host through SSH, initiate ping to the NFS storage server with large MTU size and set the "Do Not Fragment" bit of IP packet to 1. Use the vmkping command as shown in the example:
Example 5
~ # vmkping -d -s 8972 10.10.40.64
PING 10.10.40.64 (10.10.40.64): 8972 data bytes
8980 bytes from 10.10.40.64: icmp_seq=0 ttl=64 time=0.417 ms
8980 bytes from 10.10.40.64: icmp_seq=1 ttl=64 time=0.518 ms
8980 bytes from 10.10.40.64: icmp_seq=2 ttl=64 time=0.392 ms
--- 10.10.40.64 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.392/0.442/0.518 ms
~ #
Ensure that the packet size is 8972 due to various L2/L3 overhead. Also ping all other hosts' vMotion and NFS vmkernel interfaces. The pings must be successful. If a ping is not successful, verify that 9000 MTU is configured. Follow these steps to verify:
1.  Verify that 9000 MTU is set on the NFS share IP address on the VNX5500 storage device(s).
2.  Make sure that a "jumbo-mtu" policy map is created on the Nexus 5000 series switches with the default class having MTU 9216. Make sure that the "jumbo-mtu" policy is applied to the system classes on the ingress traffic.
3.  Make sure that the traffic from the storage array to the Cisco UCS B200 M3 Blade Servers is marked properly.
4.  Make sure that the MTU 9216 is set in the Cisco UCS Manager system class configuration, and QoS policy is correctly set in the port-profiles.
5.  Make sure that the 9000 MTU is set for vmkernel ports used for vMotion as well as storage access VNICs.
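The 8972-byte figure quoted above is just arithmetic on the header overhead, worth keeping in mind when reading the vmkping results in this thread:

```python
# The IP header (20 bytes) and ICMP header (8 bytes) are carried
# inside the link MTU, so the largest unfragmented ICMP payload on
# a 9000-byte MTU link is MTU - 28 = 8972 bytes.
MTU = 9000
IP_HEADER = 20
ICMP_HEADER = 8

max_payload = MTU - IP_HEADER - ICMP_HEADER
print(max_payload)  # 8972
```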

 

New Member


On the UCS I have a best effort system class at 9216

Wouldn't that take care of it?

Cisco Employee


Tony,

 

How is your UCS connected to the storage (topology)?

New Member


 

 

everything is connected directly to a pair of redundant 7Ks

 

2 redundant FIs, 2 uplinks each to each 7k. 

 

 

 

New Member


From the core switch itself, when I try to ping the appliance IP I get this

 

# ping 10.60.1.106 packet-size 9216 c 20
PING 10.60.1.106 (10.60.1.106): 9216 data bytes
9224 bytes from 10.60.1.106: icmp_seq=0 ttl=63 time=10.943 ms
Request 1 timed out
9224 bytes from 10.60.1.106: icmp_seq=2 ttl=63 time=8.93 ms
Request 3 timed out
9224 bytes from 10.60.1.106: icmp_seq=4 ttl=63 time=17.072 ms
Request 5 timed out
9224 bytes from 10.60.1.106: icmp_seq=6 ttl=63 time=8.977 ms
Request 7 timed out
9224 bytes from 10.60.1.106: icmp_seq=8 ttl=63 time=8.984 ms
Request 9 timed out
9224 bytes from 10.60.1.106: icmp_seq=10 ttl=63 time=9.028 ms
Request 11 timed out
9224 bytes from 10.60.1.106: icmp_seq=12 ttl=63 time=8.827 ms
Request 13 timed out
9224 bytes from 10.60.1.106: icmp_seq=14 ttl=63 time=9.171 ms
Request 15 timed out
9224 bytes from 10.60.1.106: icmp_seq=16 ttl=63 time=9.054 ms
Request 17 timed out
9224 bytes from 10.60.1.106: icmp_seq=18 ttl=63 time=8.76 ms
Request 19 timed out

--- 10.60.1.106 ping statistics ---
20 packets transmitted, 10 packets received, 50.00% packet loss
round-trip min/avg/max = 8.76/9.974/17.072 ms
# ping 10.60.1.106 packet-size 8000 c 20
PING 10.60.1.106 (10.60.1.106): 8000 data bytes
8008 bytes from 10.60.1.106: icmp_seq=0 ttl=63 time=2.49 ms
8008 bytes from 10.60.1.106: icmp_seq=1 ttl=63 time=6.139 ms
8008 bytes from 10.60.1.106: icmp_seq=2 ttl=63 time=7.22 ms
Request 3 timed out
8008 bytes from 10.60.1.106: icmp_seq=4 ttl=63 time=2.126 ms
8008 bytes from 10.60.1.106: icmp_seq=5 ttl=63 time=5.761 ms
8008 bytes from 10.60.1.106: icmp_seq=6 ttl=63 time=6.429 ms
Request 7 timed out
8008 bytes from 10.60.1.106: icmp_seq=8 ttl=63 time=1.78 ms
8008 bytes from 10.60.1.106: icmp_seq=9 ttl=63 time=6.466 ms
8008 bytes from 10.60.1.106: icmp_seq=10 ttl=63 time=6.21 ms
Request 11 timed out
8008 bytes from 10.60.1.106: icmp_seq=12 ttl=63 time=1.716 ms
8008 bytes from 10.60.1.106: icmp_seq=13 ttl=63 time=6.277 ms
8008 bytes from 10.60.1.106: icmp_seq=14 ttl=63 time=14.51 ms
Request 15 timed out
8008 bytes from 10.60.1.106: icmp_seq=16 ttl=63 time=2.685 ms
8008 bytes from 10.60.1.106: icmp_seq=17 ttl=63 time=5.212 ms
8008 bytes from 10.60.1.106: icmp_seq=18 ttl=63 time=6.959 ms
Request 19 timed out
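One pattern worth noting in the 9216-byte run above: every even icmp_seq replied and every odd one timed out. A strictly alternating loss pattern like that often suggests one leg of a redundant or load-balanced pair (e.g. one of two links or peers) not passing the large frames, rather than random congestion loss; this is an inference, not a confirmed diagnosis. A quick sanity check of the pattern, with the icmp_seq values taken from the output above:

```python
# icmp_seq values from the 9216-byte ping: evens replied, odds timed out.
received = [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
lost     = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]

# True only if losses fall on exactly one of two alternating positions.
alternating = (all(seq % 2 == 0 for seq in received)
               and all(seq % 2 == 1 for seq in lost))
print(alternating)  # True
```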

New Member


I tried different sizes and still some got dropped

 

 ping 10.60.1.106 packet-size 800 c 20
PING 10.60.1.106 (10.60.1.106): 800 data bytes
808 bytes from 10.60.1.106: icmp_seq=0 ttl=63 time=1.995 ms
808 bytes from 10.60.1.106: icmp_seq=1 ttl=63 time=1.151 ms
808 bytes from 10.60.1.106: icmp_seq=2 ttl=63 time=1.235 ms
808 bytes from 10.60.1.106: icmp_seq=3 ttl=63 time=1.005 ms
808 bytes from 10.60.1.106: icmp_seq=4 ttl=63 time=1.844 ms
808 bytes from 10.60.1.106: icmp_seq=5 ttl=63 time=2.016 ms
808 bytes from 10.60.1.106: icmp_seq=6 ttl=63 time=1.425 ms
808 bytes from 10.60.1.106: icmp_seq=7 ttl=63 time=0.993 ms
808 bytes from 10.60.1.106: icmp_seq=8 ttl=63 time=1.612 ms
808 bytes from 10.60.1.106: icmp_seq=9 ttl=63 time=1.056 ms
808 bytes from 10.60.1.106: icmp_seq=10 ttl=63 time=2.187 ms
808 bytes from 10.60.1.106: icmp_seq=11 ttl=63 time=2.292 ms
808 bytes from 10.60.1.106: icmp_seq=12 ttl=63 time=2.011 ms
808 bytes from 10.60.1.106: icmp_seq=13 ttl=63 time=2.579 ms
808 bytes from 10.60.1.106: icmp_seq=14 ttl=63 time=2.947 ms
808 bytes from 10.60.1.106: icmp_seq=15 ttl=63 time=2.287 ms
808 bytes from 10.60.1.106: icmp_seq=16 ttl=63 time=0.956 ms
808 bytes from 10.60.1.106: icmp_seq=17 ttl=63 time=1.65 ms
808 bytes from 10.60.1.106: icmp_seq=18 ttl=63 time=2.35 ms
808 bytes from 10.60.1.106: icmp_seq=19 ttl=63 time=1.629 ms

--- 10.60.1.106 ping statistics ---
20 packets transmitted, 20 packets received, 0.00% packet loss
round-trip min/avg/max = 0.956/1.76/2.947 ms
# ping 10.60.1.106 packet-size 1800 c 20
PING 10.60.1.106 (10.60.1.106): 1800 data bytes
1808 bytes from 10.60.1.106: icmp_seq=0 ttl=63 time=1.21 ms
1808 bytes from 10.60.1.106: icmp_seq=1 ttl=63 time=1.428 ms
1808 bytes from 10.60.1.106: icmp_seq=2 ttl=63 time=1.385 ms
1808 bytes from 10.60.1.106: icmp_seq=3 ttl=63 time=1.63 ms
1808 bytes from 10.60.1.106: icmp_seq=4 ttl=63 time=1.506 ms
1808 bytes from 10.60.1.106: icmp_seq=5 ttl=63 time=1.942 ms
1808 bytes from 10.60.1.106: icmp_seq=6 ttl=63 time=1.601 ms
1808 bytes from 10.60.1.106: icmp_seq=7 ttl=63 time=1.893 ms
1808 bytes from 10.60.1.106: icmp_seq=8 ttl=63 time=1.354 ms
Request 9 timed out
1808 bytes from 10.60.1.106: icmp_seq=10 ttl=63 time=1.485 ms
1808 bytes from 10.60.1.106: icmp_seq=11 ttl=63 time=1.894 ms
1808 bytes from 10.60.1.106: icmp_seq=12 ttl=63 time=1.67 ms
1808 bytes from 10.60.1.106: icmp_seq=13 ttl=63 time=1.406 ms
1808 bytes from 10.60.1.106: icmp_seq=14 ttl=63 time=1.611 ms
1808 bytes from 10.60.1.106: icmp_seq=15 ttl=63 time=1.929 ms
1808 bytes from 10.60.1.106: icmp_seq=16 ttl=63 time=1.454 ms
1808 bytes from 10.60.1.106: icmp_seq=17 ttl=63 time=1.488 ms
1808 bytes from 10.60.1.106: icmp_seq=18 ttl=63 time=1.482 ms
Request 19 timed out

Cisco Employee

Hi Tony,

Looks like there are some problems on your 7K. How did you configure jumbo frames on the Nexus 7K? The configuration should look something like this:

 

switch(config)#policy-map type network-qos jumbo
switch(config-pmap-nq)#class type network-qos class-default
switch(config-pmap-c-nq)#mtu 9216
switch(config-pmap-c-nq)#exit
switch(config-pmap-nq)#exit
switch(config)#system qos
switch(config-sys-qos)#service-policy type network-qos jumbo

Did you set the MTU size on the storage to 9000?

New Member


This is what I see.... Looks like it is not enabled?

 

 

sh policy-map 


  Type qos policy-maps
  ====================

  policy-map type qos ISCSI 
    class  class-default
      set cos 5

  Type queuing policy-maps
  ========================

  policy-map type queuing default-in-policy 
    class type queuing in-q1
      queue-limit percent 50 
      bandwidth percent 80 
    class type queuing in-q-default
      queue-limit percent 50 
      bandwidth percent 20 
  policy-map type queuing default-out-policy 
    class type queuing out-pq1
      priority level 1
      queue-limit percent 16 
    class type queuing out-q2
      queue-limit percent 1 
    class type queuing out-q3
      queue-limit percent 1 
    class type queuing out-q-default
      queue-limit percent 82 
      bandwidth remaining percent 25
  policy-map type queuing default-4q-8e-in-policy 
    class type queuing 2q4t-8e-in-q1
      queue-limit percent 10 
      bandwidth percent 50 
    class type queuing 2q4t-8e-in-q-default
      queue-limit percent 90 
      bandwidth percent 50 
  policy-map type queuing default-4q-8e-out-policy 
    class type queuing 1p3q1t-8e-out-pq1
      priority level 1
    class type queuing 1p3q1t-8e-out-q2
      bandwidth remaining percent 33
    class type queuing 1p3q1t-8e-out-q3
      bandwidth remaining percent 33
    class type queuing 1p3q1t-8e-out-q-default
      bandwidth remaining percent 33


  Type control-plane policy-maps
  ==============================

  policy-map type control-plane copp-system-p-policy-strict
    class copp-system-p-class-critical
      set cos 7
      police cir 39600 kbps bc 250 ms conform transmit violate drop 
    class copp-system-p-class-important
      set cos 6
      police cir 1060 kbps bc 1000 ms conform transmit violate drop 
    class copp-system-p-class-management
      set cos 2
      police cir 10000 kbps bc 250 ms conform transmit violate drop 
    class copp-system-p-class-normal
      set cos 1
      police cir 680 kbps bc 250 ms conform transmit violate drop 
    class copp-system-p-class-normal-dhcp
      set cos 1
      police cir 680 kbps bc 250 ms conform transmit violate drop 
    class copp-system-p-class-normal-dhcp-relay-response
      set cos 1
      police cir 900 kbps bc 500 ms conform transmit violate drop 
    class copp-system-p-class-redirect
      set cos 1
      police cir 280 kbps bc 250 ms conform transmit violate drop 
    class copp-system-p-class-exception
      set cos 1
      police cir 360 kbps bc 250 ms conform transmit violate drop 
    class copp-system-p-class-monitoring
      set cos 1
      police cir 130 kbps bc 1000 ms conform transmit violate drop 
    class copp-system-p-class-l2-unpoliced
      police cir 8 gbps bc 5 mbytes conform transmit violate transmit 
    class copp-system-p-class-undesirable
      set cos 0
      police cir 32 kbps bc 250 ms conform drop violate drop 
    class copp-system-p-class-l2-default
      police cir 100 kbps bc 250 ms conform transmit violate drop 
    class class-default
      set cos 0
      police cir 100 kbps bc 250 ms conform transmit violate drop 


  Type network-qos policy-maps
  ============================
  policy-map type network-qos default-nq-4e-policy
    class type network-qos c-nq-4e-drop
      congestion-control tail-drop 
      mtu 1500
    class type network-qos c-nq-4e-ndrop-fcoe
      pause
      mtu 2112
    class type network-qos c-nq-4e-ndrop
      pause
      mtu 2112
  policy-map type network-qos default-nq-6e-policy
    class type network-qos c-nq-6e-drop
      congestion-control tail-drop 
      mtu 1500
    class type network-qos c-nq-6e-ndrop-fcoe
      pause
      mtu 2112
    class type network-qos c-nq-6e-ndrop
      pause
      mtu 2112
  policy-map type network-qos default-nq-7e-policy
    class type network-qos c-nq-7e-drop
      congestion-control tail-drop 
      mtu 1500
    class type network-qos c-nq-7e-ndrop-fcoe
      pause
      mtu 2112
  policy-map type network-qos default-nq-8e-policy
    class type network-qos c-nq-8e
      congestion-control tail-drop 
      mtu 1500

Cisco Employee


Hi Tony,

 

Yes, I don't see jumbo frames enabled on your Nexus 7K. When you configure jumbo frames, you need to make sure they are enabled on every device the frames will traverse. It won't work if you only enable jumbo frames in one place.
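The end-to-end point can be illustrated with a small sketch; the hop names and MTU values below are hypothetical, chosen only to mirror the topology described in this thread:

```python
# The usable MTU of a path is the minimum MTU of every device along it.
# A single non-jumbo hop caps the whole path, no matter how the rest
# is configured. Hop names/values here are illustrative, not measured.
def path_mtu(hops):
    return min(hops.values())

hops = {
    "esxi_vmkernel": 9000,
    "ucs_vnic": 9000,
    "fabric_interconnect": 9216,
    "nexus7k": 1500,   # jumbo not yet enabled here (the suspected culprit)
    "storage_array": 9000,
}
print(path_mtu(hops))  # 1500
```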

New Member


Is the policy map a global enable of jumbo frames on the entire 7K? I remember it used to be "system mtu 9216" or something like that on the IOS switches.

Cisco Employee

Hey Tony,

Sorry, I gave you the jumbo frame configuration for the N5K. For the 7K you will need to do the following:

 

http://www.cisco.com/c/en/us/support/docs/switches/nexus-5000-series-switches/112080-config-mtu-nexus.html

 

Complete these steps in order to set the jumbo frame in a Nexus 7010 Switch:

Nexus-7010

!--- Set the MTU to its maximum  
!--- size (9216 bytes) in order  
!--- to enable the Jumbo MTU
!--- for the whole switch. 


switch(config)#system jumbomtu 9216


!--- Set the MTU specification for an interface.

switch(config)#interface ethernet x/x


!--- By default, Cisco NX-OS configures Layer 3 parameters.
!--- In order to configure Layer 2 parameters, use this command.

switch(config-if)#switchport

switch(config-if)#mtu 9216
switch(config-if)#exit
New Member


I already have that enabled

 

# sh run all | i jumbomtu
system jumbomtu 9216

 

also on all the data iscsi ports

interface Ethernet3/33
  switchport access vlan 60
  spanning-tree port type edge
  speed 1000
  mtu 9216
  no shutdown

# sh run interface ethernet 3/34

!Command: show running-config interface Ethernet3/34
!Time: Wed Mar 19 10:57:22 2014

version 6.1(2)

interface Ethernet3/34
  switchport access vlan 60
  spanning-tree port type edge
  mtu 9216
  no shutdown

# sh run interface ethernet 3/36

!Command: show running-config interface Ethernet3/36
!Time: Wed Mar 19 10:57:23 2014

version 6.1(2)

interface Ethernet3/36
  switchport access vlan 15
  spanning-tree port type edge
  speed 1000
  mtu 9216
  no shutdown

Cisco Employee


What about the storage array, is the MTU set to jumbo frames? And is the CoS on the storage array the same as on your 7K?

 

At this point I'm running out of ideas; I would recommend opening a TAC case for the Nexus 7K to help you with the configuration.

New Member


Yes, I verified on the storage array that the iSCSI ports are set to MTU 9000.

 

I do not have any CoS set on the storage array.

New Member


hi there

 

I checked my VLAN interface and it shows this

 


interface Vlan60
  no ip redirects
  ip address 10.60.0.1/16
  no ipv6 redirects
  ip proxy-arp
  no hsrp bfd
  hsrp version 2
  hsrp delay minimum 0 reload 0 
  no hsrp use-bia
  hsrp 60 
    authentication cisco
    name hsrp-Vlan60-60
    mac-address 0000.0C9F.F03C
    no preempt
    priority 100 forwarding-threshold lower 1 upper 100
    timers 3 10
    ip 10.60.0.5 
  description SERVERS60
  no shutdown
  mtu 1500
  bandwidth 1000000
  delay 1
  medium broadcast
  snmp trap link-status
  carrier-delay msec 100
  load-interval counter 1 60
  load-interval counter 2 300
  no load-interval counter 3
  mac-address d867.d907.ffc1 
  no management

 

 

Looks like it is set at 1500. Do I have to change it there as well? The document did not mention it.

 

thanks

Cisco Employee


Before you make the changes, Tony, are you using FCoE on the Nexus 7K?

New Member


hi

 

No, I am not using FCoE
