1548 Views · 5 Helpful · 29 Replies

Improving download/upload speed on Catalyst 6509

cisco24x7
Level 6

LinuxES-lab1: 192.168.15.110/24

LinuxES-lab2: 192.168.15.100/24

Win2k3-1: 192.168.15.111/24

All 3 devices are connected to a Cisco Catalyst 6509 with a Sup32 and copper Gigabit Ethernet interfaces. All 3 devices are Dell servers. lab1 is a Dell 2550 with dual 3.0 GHz processors and 2 GB of RAM. lab2 and Win2k3-1 are Dell quad-processor 3.1 GHz machines with 4 GB of RAM. Everything on the switch and the interfaces on the servers is hard-coded to 1000/full.
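A quick sanity check at this point is to confirm the forced speed/duplex actually took effect on both ends, since a duplex mismatch shows up as exactly this kind of throughput loss. A minimal sketch, assuming the Linux NICs are eth0 (the interface name and switch prompt are assumptions):

[root@LinuxES-lab1 tmp]# ethtool eth0 | grep -E "Speed|Duplex"

6509#show interfaces status

The ethtool output should report 1000Mb/s and Full, and the switch side should show the same speed/duplex on the three server ports with no errors incrementing.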

I have an FTP server and iperf running on LinuxES-lab2. When I test with iperf from lab1, I get about 856 Mbps throughput:

[root@LinuxES-lab1 tmp]# iperf -c 192.168.15.100 -t 10

------------------------------------------------------------

Client connecting to 192.168.15.100, TCP port 5001

TCP window size: 16.0 KByte (default)

------------------------------------------------------------

[ 3] local 192.168.15.110 port 32877 connected with 192.168.15.100 port 5001

[ ID] Interval Transfer Bandwidth

[ 3] 0.0-10.0 sec 1020 MBytes 856 Mbits/sec

[root@LinuxES-lab1 tmp]#
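That run uses the 16 KByte default TCP window, so it can be worth repeating the test with a larger window and with parallel streams to see whether the single-stream number is window-limited rather than switch-limited. A sketch using standard iperf options (-w for window size, -P for parallel streams); the 256K value is just an example:

[root@LinuxES-lab2 tmp]# iperf -s -w 256K

[root@LinuxES-lab1 tmp]# iperf -c 192.168.15.100 -t 10 -w 256K

[root@LinuxES-lab1 tmp]# iperf -c 192.168.15.100 -t 10 -P 4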

When I tested from Win2k3-1, I got about 600 Mbps throughput.

However, when I download a 2GB file from lab2 to lab1, I get only about 325 Mbps. If I use Secure Copy (scp), I get only about 72 Mbps. If I use Secure FTP (sFTP), I get only about 24 Mbps.

Is there a way to improve the download speed for FTP, scp and sFTP?

Thanks.
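Since scp and sFTP encrypt everything in software, one thing worth ruling out first is the SSH cipher saturating a single CPU. A rough sketch, assuming a 2 GB test file at /tmp/testfile (the file path and cipher names are only examples, and availability of arcfour depends on the OpenSSH build):

[root@LinuxES-lab1 tmp]# openssl speed aes-128-cbc

[root@LinuxES-lab1 tmp]# scp -c arcfour /tmp/testfile root@192.168.15.100:/tmp/

[root@LinuxES-lab1 tmp]# scp -c blowfish-cbc /tmp/testfile root@192.168.15.100:/tmp/

If a faster cipher moves the scp number noticeably, the bottleneck is encryption on the hosts rather than the switch.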

29 Replies

Linux_1 is connected to port 1

Linux_2 is connected to port 11

Win2k-1 is connected to port 21

Same result: iperf shows 856 Mbps throughput while scp and sFTP show very poor performance.

Any more ideas? Thanks.

Well, if you interconnect your two Linux servers directly, see what results you get then.

I suspect they'll be what you've seen so far. That would then point at the hosts and/or their applications.

PS:

BTW, RAID 5 slows writes. It's great for the "I" portion of the acronym, but not for write performance. Have you benched the drive standalone? It might account for a major portion of the 325 Mbps you've documented.
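A rough way to bench the array outside of any network transfer, assuming the array presents as /dev/sda and using a throwaway 2 GB file (both are just examples):

[root@LinuxES-lab2 tmp]# dd if=/dev/zero of=/tmp/ddtest bs=1M count=2048 conv=fdatasync

[root@LinuxES-lab2 tmp]# hdparm -t /dev/sda

If the sequential write rate from dd lands near the 325 Mbps (roughly 40 MB/s) FTP result, the RAID 5 write penalty is a likely suspect.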

Jon,

"I appreciate what you said about iperf results but where are your servers patched into the WS-X6148-GE-TX.

The WS-X6148-GE-TX is a heavily oversubscribed blade ie. it has an oversubscription rate of 8:1 so for every 8 ports there is maximum throughput of 1Gbps."

Is this documented anywhere? Can you provide the link for this? Thanks.

David

You had to ask :-). I can never find the doc that explains all this, but Edison Ortiz seems to know where they are whenever we get into these sorts of discussions, so I've requested he post the link if he has it.

Jon

OK, I replaced the Catalyst 6509 with an Extreme switch. I am now able to push about 600 Mbps FTP, 350 Mbps scp and 300 Mbps sFTP. At 350 Mbps scp throughput, CPU on the Linux boxes is at 90% utilization, which is expected.

It seems to me like the Catalyst 6509 cannot scale past 90-100 Mbps with scp traffic on the Gig port.

Any ideas anyone? Thanks.

Those are interesting results! :P

I know one solution:

http://www.cisco.com/en/US/products/ps9402/

Out of curiosity, which Extreme switch did you use?

Geoff

Jon,

It's part of the Release Notes.

http://www.cisco.com/en/US/partner/docs/switches/lan/catalyst6500/ios/12.2SXF/native/release/notes/OL_4164.html

As for your previous post, the 6148-GE-TX is as follows:

Number of ports: 48

Number of port groups: 2

Port ranges per port group: 1-24, 25-48

HTH,

__

Edison.

Edison

I'm getting confused now

=============================================

When you use either the WS-X6548-GE-TX or WS-X6148-GE-TX modules, there is a possibility that individual port utilization can lead to connectivity problems or packet loss on the surrounding interfaces. Especially when you use EtherChannel and Remote Switched Port Analyzer (RSPAN) in these line cards, you can potentially see the slow response due to packet loss. These line cards are oversubscription cards that are designed to extend gigabit to the desktop and might not be ideal for server farm connectivity. On these modules there is a single 1-Gigabit Ethernet uplink from the port ASIC that supports eight ports.

---> These cards share a 1 Mb buffer between a group of ports (1-8, 9-16, 17-24, 25-32, 33-40, and 41-48) since each block of eight ports is 8:1 oversubscribed. The aggregate throughput of each block of eight ports cannot exceed 1 Gbps.. <---

Table 4 in the Cisco Catalyst 6500 Series 10/100- & 10/100/1000-Mbps Ethernet Interface Modules shows the different types of Ethernet interface modules and the supported buffer size per port.

=============================================

I'm sure you had diagrams that you posted that showed the port groupings of these modules?

Jon

Don't be confused. Those Release Notes are wrong.

I did some digging now and found some internal documents which I can't publish.

The WS-X6148-GE-TX has 2 Pinnacles that connect to the ASIC but these 2 Pinnacles are broken down into 3 Port Groups each. Each Port Group has 8 Ports.

Pinnacle 1

Port Group 1 = Ports 1-8

Port Group 2 = Ports 9-16

Port Group 3 = Ports 17-24

Pinnacle 2

Port Group 1 = Ports 25-32

Port Group 2 = Ports 33-40

Port Group 3 = Ports 41-48

HTH,

__

Edison.

Thanks Edison.

"These line cards are oversubscription cards that are designed to extend gigabit to the desktop and might not be ideal for server farm connectivity. On these modules there is a single 1-Gigabit Ethernet uplink from the port ASIC that supports eight ports."

So let me see if I understand this correctly, since I am a firewall/security person and not a routing/switching person. Cisco is selling me a Gigabit line card, but the line card can NOT do gig throughput with my servers. Is that a correct statement?

Maybe it is time for me to look at Extreme switches.

David

Your understanding is correct. However, if you look at where Cisco positions this module, it is in the wiring closet and not as a server farm blade. So chances are it is unlikely that you will be oversubscribing too much at any one time.

Obviously it is also cheaper than a module that supports full gigabit throughput on each port, although even the 6748 module has a little oversubscription, i.e. 48 Gbps of port capacity (48 x 1 Gbps) with a 40 Gbps connection to the switch fabric.

Many people get all wound up about gigabit throughput being just that, but this module was primarily designed for clients, not servers, hence the oversubscription.

Edit - to be more precise, the line card can do gigabit throughput on a port, but if more than one port in the group of 8 is being used at the same time, no port in the group will get the full gigabit throughput.

Jon
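One way to see the oversubscription in practice is to watch the traffic rates on the server ports, and on their neighbours in the same group of eight, while a transfer is running; if a group's aggregate approaches 1 Gbps, per-port throughput drops accordingly. A hedged example from the 6500 CLI, guessing that the servers sit in slot 3 on ports 1, 11 and 21:

6509#show interfaces GigabitEthernet3/1 | include rate

6509#show interfaces GigabitEthernet3/11 | include rate

6509#show interfaces GigabitEthernet3/21 | include rate

The 5-minute input/output rate lines give a quick view of how much of the shared uplink each port group is actually consuming during the test.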

Hi,

As a suggestion, try to sniff the traffic being sent between the Linux boxes during the iperf test, FTP and scp.

I'm pretty sure that in the scp test you will see a lot of retransmissions and the TCP window size will not grow up to its limit.

But if you start additional scp sessions, you will see that the sum of the scp connections increases proportionally to the number of scp sessions.
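For the capture itself, something along these lines should be enough to spot retransmissions and watch the advertised window (the interface name and output path are assumptions):

[root@LinuxES-lab1 tmp]# tcpdump -i eth0 -s 0 -w /tmp/scp-test.pcap host 192.168.15.100 and port 22

The file can then be opened in Wireshark and filtered on tcp.analysis.retransmission, or the window size plotted per stream, to compare the scp run against the iperf and FTP runs.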

The Extreme chassis modules, at least the last time I looked, are also oversubscribed. The G48Te, for example, has an oversubscription of 4:1, or 2:1 with 2 MSMs. I am by no means an expert on any of this, but I do have Extreme gear in our core. If you are a CLI guy, as I assume you are, I don't think you will enjoy the interface Extreme has to offer.

Just my two cents.

Joseph W. Doherty
Hall of Fame

(Indentation was getting a bit much.)

There's still something odd about this. Regardless of the oversubscription capacity of the card, why such differences in traffic rates between Iperf, FTP, and scp/sFTP on the 6500?

Yes, the last set of stats, on the Extreme switch, shows scp/sFTP at half the rate of FTP with the server being CPU constrained, but not the same proportions across the 6500. I.e., it makes sense that Iperf would be the fastest, likely NIC limited; followed by straight FTP, perhaps disk limited; followed by scp/sFTP, CPU limited. What doesn't make sense is why, if the 6500 could handle 856 Mbps of Iperf traffic and 600 Mbps from Windows, it couldn't also handle similar bandwidths for the other traffic the way the Extreme switch did.
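One way to split those factors apart on the 6500 would be to take the receiving disk out of the scp path entirely and compare against a normal copy, for example (the file path is only an example):

[root@LinuxES-lab1 tmp]# scp /tmp/testfile root@192.168.15.100:/dev/null

[root@LinuxES-lab1 tmp]# scp /tmp/testfile root@192.168.15.100:/tmp/

If both runs land around the same 72 Mbps on the 6500, the disks are not the limiting factor for scp there, which points back at the hosts' CPU/cipher or at the switch path.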
