Remedy poor 10GbE performance

planzone
Level 1

Greetings All,

I hope anyone taking the time to read this can help me with my plight.

We are getting poor performance out of our 10 Gig infrastructure, and I am not certain where the problem lies or even where to look. The network administrator is reviewing things, but he is stumped as well. Our environment has Nexus 5000 series switches (I cannot tell you the exact model, etc., as I do not administer them nor have access to do so). We are using 10GigE over fiber.

The scenario is as follows: our ESXi 4.1 virtual infrastructure is all 10 Gig SFPs (HP), and our NetApp SAN has 10 Gig as well. We use CommVault as our backup solution, and those Windows 2008 R2 boxes have 10GigE SFP modules in them as well. We barely get about 1 Gb of transfer rate.

So, as a process of elimination, I took the SAN and the virtual infrastructure out of the equation.

I copied a ~5 GB file from Windows server A to Windows server B (both on the 10GbE infrastructure) and got the same performance of barely 1 Gb.

My method, albeit not too scientific, was a robocopy to a shared drive on the other server. We also turned off flow control on the servers' NICs and the Nexus ports; still no difference.
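(For what it's worth, the copy was nothing fancier than something along the lines of robocopy D:\testdata \\SERVERB\testshare bigfile.bak - those paths are made up for illustration, but that was the whole method: one file, one stream.)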

One thing we have not tried is enabling jumbo frames. However, this would be a bigger project, and the net admin is hesitant to tackle it.
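From what I have read, on the Nexus 5000 side it would involve something like a system-level network-qos policy (this is just my understanding from the docs, not something we have configured):

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo

plus matching MTU/jumbo settings on every server NIC, the ESXi vSwitches, and the NetApp interfaces - which is part of why it is a bigger project.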

My problem is that we have quite an expensive 10 Gig infrastructure that is not being utilized to its fullest. As the private cloud we are building gets bigger, backup windows are shrinking, and having this perform at 10 Gb would be awesome.

What do we need to do to get this performing at, or near, 10 Gb speeds?

I also noticed that while the file was copying, the TCP offload and performance charts on the servers were bored and barely plotted anything.

So I suppose my questions are the obvious ones:

- What tool or tools can I use to actually monitor the performance?

- Is enabling jumbo frames the answer?

- Any other ideas on what may need to be tweaked? Part of me thinks it is software, and part of me thinks it is something with the network config. But what are the magical settings to enjoy 10 Gb performance nirvana?

thanks for reading.

2 Replies

johnnylingo
Level 5

I don't know if these commands are supported on the Nexus, but on a 6500 I'd recommend these 3:

show interface counter errors - Checks error counts on interfaces

show flowcontrol - Displays flow control status on interfaces

show platform hardware capacity fabric - Shows switch fabric resources by module / channel
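If the syntax differs on the Nexus, the NX-OS equivalents should be close to these (going from memory, so double-check on your platform):

show interface counters errors

show interface flowcontrol

show queuing interface ethernet x/y - per-port queuing and drop counters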

However, the most likely explanation is that it's simply a limitation of the servers. What is the max transfer rate of the disks and controllers? That is the first place to look for a bottleneck when copying files.
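Keep in mind that 1 Gb/s works out to only about 125 MB/s, which is in the same ballpark as what a single-stream file copy can pull from a modest disk array, so a ~5 GB robocopy topping out around 1 Gb could easily be disk-bound rather than network-bound. Copying the same file locally from disk to disk on each server would tell you what ceiling the storage imposes before blaming the 10GbE path.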

Joseph W. Doherty
Hall of Fame

Disclaimer

The Author of this posting offers the information contained within this posting without consideration and with the reader's understanding that there's no implied or expressed suitability or fitness for any purpose. Information provided is for informational purposes only and should not be construed as rendering professional advice of any kind. Usage of this posting's information is solely at reader's own risk.

Liability Disclaimer

In no event shall Author be liable for any damages whatsoever (including, without limitation, damages for loss of use, data or profit) arising out of the use or inability to use the posting's information even if Author has been advised of the possibility of such damage.

Posting

One possible issue: do your TCP receiving hosts provide an RWIN (receive window) large enough to support your BDP (bandwidth-delay product)?
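As a rough worked example (assuming a 1 ms RTT just for the arithmetic; LAN RTTs are usually lower): at 10 Gbps, BDP = 10^10 b/s x 0.001 s = 10^7 bits, or about 1.25 MB. A receiver advertising only a 64 KB window would then cap a single TCP stream at roughly 64 KB per 1 ms, i.e. around 500 Mbps, regardless of link speed. On Windows 2008 R2 you can check that receive window auto-tuning hasn't been disabled with "netsh interface tcp show global" (the auto-tuning level should normally read "normal").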
