
New Member

P81e Performance

Hi all.

I have a question about the performance of my servers.

I have two UCS C200 M2 servers with P81E VIC cards, connected with Twinax cables to a Nexus 5548UP and a NetApp storage array. I have detected low performance between the servers and the NetApp storage, and low transfer rates between the servers. The OS is Windows Server 2012 with Hyper-V. I don't know if it is a bad card configuration or an N5k problem.

Many thanks.

Regards.

Jordi.

5 REPLIES
VIP Green


I assume you use IP storage, iSCSI and/or NFS?

What does "low" mean? Numbers, please.

- do you use jumbo frames?

- did you check the counters on the Ethernet interfaces, e.g. packet drops, retransmissions?

- do you use HA / multipathing? Did you try disabling one path? (Some example commands for these checks are sketched below.)
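
For reference, a rough sketch of commands that could be used for these checks (the interface numbers and the Windows-side details are assumptions, adjust to your environment):

    On the Nexus 5548UP (errors/drops, per-class queuing, network-qos MTU):
      show interface ethernet 1/1 counters errors
      show queuing interface ethernet 1/1
      show policy-map system

    On Windows Server 2012 (effective MTU per interface, MPIO disk/path status):
      netsh interface ipv4 show subinterfaces
      mpclaim -s -d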

New Member


Hi wdey,

Many thanks for your answer.

I'm using a VIC P81E with 2 FCoE uplinks, one Nexus 5548UP, and an FC storage controller.

I have configured jumbo frames on the vNIC adapter and on the Nexus 5k, and multipathing is enabled in the OS.
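
For comparison, this is roughly how jumbo MTU is usually enabled system-wide on a Nexus 5548 (a sketch only; the policy-map name "jumbo" is just an example, not necessarily what is configured here):

    policy-map type network-qos jumbo
      class type network-qos class-default
        mtu 9216
    system qos
      service-policy type network-qos jumbo

The "HW MTU: 9216" shown for qos-group 0 in the queuing output below suggests the switch side already has something like this applied.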

Nexus5k(config)# sh queuing interface ethernet 1/1
Ethernet1/1 queuing information:
  TX Queuing
    qos-group  sched-type  oper-bandwidth
        0       WRR             50
        1       WRR             50

  RX Queuing
    qos-group 0
    q-size: 360960, HW MTU: 9216 (9216 configured)
    drop-type: drop, xon: 0, xoff: 360960
    Statistics:
        Pkts received over the port             : 4781
        Ucast pkts sent to the cross-bar        : 3960
        Mcast pkts sent to the cross-bar        : 821
        Ucast pkts received from the cross-bar  : 898314
        Pkts sent to the port                   : 946774
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Inactive)

    qos-group 1
    q-size: 79360, HW MTU: 2158 (2158 configured)
    drop-type: no-drop, xon: 20480, xoff: 40320
    Statistics:
        Pkts received over the port             : 22463830
        Ucast pkts sent to the cross-bar        : 22313680
        Mcast pkts sent to the cross-bar        : 150150
        Ucast pkts received from the cross-bar  : 16914320
        Pkts sent to the port                   : 17016505
        Pkts discarded on ingress               : 0
        Per-priority-pause status               : Rx (Inactive), Tx (Active)

  Total Multicast crossbar statistics:
    Mcast pkts received from the cross-bar      : 85110

 

(partial sh interface eth1/1)

Last clearing of "show interface" counters never
  30 seconds input rate 7960 bits/sec, 2 packets/sec
  30 seconds output rate 3416 bits/sec, 3 packets/sec
  Load-Interval #2: 5 minute (300 seconds)
    input rate 9.38 Kbps, 2 pps; output rate 3.15 Kbps, 3 pps
  RX
    22548821 unicast packets  41863 multicast packets  129701 broadcast packets
    22720385 input packets  28005615655 bytes
    12720321 jumbo packets  0 storm suppression bytes
    0 runts  0 giants  0 CRC  0 no buffer
    0 input error  0 short frame  0 overrun   0 underrun  0 ignored
    0 watchdog  0 bad etype drop  0 bad proto drop  0 if down drop
    0 input with dribble  0 input discard
    0 Rx pause
  TX
    17813798 unicast packets  749518 multicast packets  111968 broadcast packets
    18675284 output packets  17792426291 bytes
    5500583 jumbo packets
    0 output errors  0 collision  0 deferred  0 late collision
    0 lost carrier  0 no carrier  0 babble 0 output discard
    0 Tx pause
  7 interface resets

 

 

But if I try to ping from Server 1 to Server 2 with the -f -l 8700 options, the ping is not successful.
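
For what it's worth, a rough way to read that test (assuming a 9000-byte MTU is intended end to end; <server2> is just a placeholder): ping -l sets the ICMP payload size, and the IP + ICMP headers add 28 bytes, so:

    ping -f -l 1472 <server2>    (1472 + 28 = 1500, should pass at the standard MTU)
    ping -f -l 8972 <server2>    (8972 + 28 = 9000, should pass only if jumbo works on every hop)

If -l 8700 already fails with -f (don't fragment) set, some element in the path (vNIC, Hyper-V virtual switch, or a switch port) is most likely still at 1500.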

 

Through the 10G link, data transfer between the servers is only about 120 MB/s.
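
As a rough back-of-the-envelope reference for how far that is from line rate:

    120 MB/s x 8 bits/byte ≈ 960 Mbit/s   (roughly 1 Gbit/s)
    10 Gbit/s / 8          ≈ 1.25 GB/s theoretical line rate

So the transfer is running at roughly one tenth of what the 10G link could carry.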

 

VIP Green


If I understand you correctly, you have vHBAs on your OS, running FCoE between the server and the N5K, and then classical FC between the N5K and the NetApp?

New Member


Correct.

Thanks.

VIP Green


Need further clarification:

- the 120 MB/sec is IP traffic between the 2 servers?

- you also have FC performance issues between server and storage? (A few commands to check the FC/FCoE path are sketched after this list.)

- it seems that jumbo frames are not working between the 2 servers either?
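
If the storage side does turn out to be slow too, a few N5K commands that could help confirm the FCoE/FC path is clean (a sketch only; the vfc and fc interface numbers are placeholders):

    show interface vfc 1
    show flogi database
    show zoneset active
    show interface fc 2/1 counters

Errors or pauses on the vfc/fc interfaces, or a missing login in the flogi database, would point at the SAN side rather than the IP side.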

 

I assume you know these guides:

http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Data_Center/UF_FCoE_final.html

http://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus5000/sw/fcoe/b_Cisco_Nexus_5000_Series_NX-OS_Fibre_Channel_over_Ethernet_Configuration_Guide_/Cisco_Nexus_5000_Series_NX-OS_Fibre_Channel_over_Ethernet_Configuration_Guide__chapter3.html

 

 

 
