Measuring Actual Gigabit Ethernet Bandwidth

Unanswered Question
Oct 1st, 2009

So I'm trying to prove that "the network is not the problem". I have two hosts connected to gigabit ports on two different Nexus 2148s. The question is whether there is a performance problem with the delivered bandwidth.


I used iperf to measure the bandwidth between the two machines in question and I get the following results:


C:\>iperf -c 165.151.2.71 -w 2m -i 1
------------------------------------------------------------
Client connecting to 165.151.2.71, TCP port 5001
TCP window size: 2.00 MByte
------------------------------------------------------------
[1912] local 165.151.5.102 port 1784 connected with 165.151.2.71 port 5001
[ ID] Interval       Transfer     Bandwidth
[1912]  0.0- 1.0 sec  54.4 MBytes   456 Mbits/sec
[1912]  1.0- 2.0 sec  53.5 MBytes   449 Mbits/sec
[1912]  2.0- 3.0 sec  55.0 MBytes   462 Mbits/sec
[1912]  3.0- 4.0 sec  51.9 MBytes   435 Mbits/sec
[1912]  4.0- 5.0 sec  49.7 MBytes   417 Mbits/sec

------------------------------------------------------------
Client connecting to 165.151.2.71, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 63.0 KByte (default)
------------------------------------------------------------
[1912] local 165.151.5.102 port 1814 connected with 165.151.2.71 port 5001
[ ID] Interval       Transfer     Bandwidth
[1912]  0.0- 1.0 sec  11.8 MBytes  99.2 Mbits/sec
[1912]  1.0- 2.0 sec  11.9 MBytes  99.7 Mbits/sec
[1912]  2.0- 3.0 sec  12.1 MBytes   101 Mbits/sec
[1912]  3.0- 4.0 sec  12.1 MBytes   101 Mbits/sec
[1912]  4.0- 5.0 sec  12.0 MBytes   101 Mbits/sec
[1912] Server Report:
[1912]  0.0- 5.0 sec   120 MBytes   100 Mbits/sec  1.311 ms  0/85365 (0%)
[1912] Sent 85365 datagrams


My problem is that I really don't understand how to interpret these results. They don't equal 1 Gbps, but does that indicate a problem? What is an "acceptable" figure supposed to look like?


Thanks for any and all help on this !


Jim

Giuseppe Larosa Thu, 10/01/2009 - 22:30

Hello Jim,

Try repeating the tests with UDP; TCP has to wait for ACKs coming back in the opposite direction before it can send more data.

(I may be wrong, but I think iperf emulates a real TCP session.)

Also try running two concurrent TCP sessions between the same two hosts.


For example, in an ATM lab setup years ago, a single netperf TCP session showed only 45 Mbps out of 100 Mbps; by running two concurrent sessions I could see two TCP streams each running at 45 Mbps.
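Both suggestions can be sketched as iperf 2.x command lines. The server address below is the one from Jim's output; the -b and -P values are illustrative assumptions, not something from the original tests:

```shell
# On the server side: iperf -s (add -u for the UDP test).

# UDP test with an explicit target bandwidth: iperf's UDP mode sends at
# a fixed rate (a low one by default), so -b must be raised before it
# will exercise a gigabit link.
iperf -c 165.151.2.71 -u -b 900M -i 1

# Two concurrent TCP streams between the same hosts (-P 2); iperf
# prints a per-stream line plus an aggregate [SUM] line.
iperf -c 165.151.2.71 -w 2m -P 2 -i 1
```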


Hope to help

Giuseppe


Joseph W. Doherty Fri, 10/02/2009 - 05:12

"I have two hosts connected to gigabit ports on two different Nexus 2148s"


What's the topology between the two switches? Was anything else running across that topology while these tests were underway?


With the same end host systems, have you tried the same tests back-to-back without any network (i.e. just a copper cable)? What else, if anything, were the end hosts doing during these tests?


PS:

I haven't looked up iperf's parameters, but it's possible you're not configured to obtain maximum performance for a gig test; for example, 100 Mbps for UDP seems very poor when you got better than 4x that using TCP.


Lots of "things" have to be right to obtain full gig performance (BTW, you'll never see 100% because of L2 and/or L3 overhead). I would expect better performance than your test numbers show, and I'd consider the Cisco switch itself perhaps the least likely cause of slower-than-expected performance.
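That L2/L3 overhead can be put into rough numbers. A back-of-the-envelope sketch (assuming a standard 1500-byte MTU and plain 20-byte IP and TCP headers, no TCP options):

```shell
# Rough upper bound on TCP goodput over gigabit Ethernet.
# Per-frame overhead on the wire: 14 (Ethernet header) + 4 (FCS)
# + 8 (preamble) + 12 (inter-frame gap) = 38 bytes.
awk 'BEGIN {
    payload = 1500 - 20 - 20   # 1460 bytes of TCP payload per frame
    wire    = 1500 + 38        # 1538 bytes on the wire per frame
    printf "max TCP goodput = %.0f Mbit/s\n", 1000 * payload / wire
}'
# prints "max TCP goodput = 949 Mbit/s"
```

So roughly 940-950 Mbps of TCP payload is the practical ceiling on a clean gigabit path, which is why the ~450 Mbps TCP numbers above leave room for improvement but the 100 Mbps UDP figure looks like a test-configuration artifact.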

jim_berlow Fri, 10/02/2009 - 11:14

Both good ideas, guys. I'll run through some of the tests you suggest and post updates soon.


Thanks,

Jim
