I am having bottlenecks in my current 2960 switches during my nightly backups. My jobs are running at 512 MB/minute and using 75% or higher of the port utilization. I am backing up a significant amount of data and it is taking 35 hours. What ideas do we have to eliminate the bottleneck and reduce backup times?
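For scale, here is the rough arithmetic on the data volume implied by those numbers (my calculation, assuming a sustained 512 MB/minute for the full 35-hour window):

```python
# Total data moved at 512 MB/minute over a 35-hour backup window.
mb_per_min = 512
hours = 35
total_mb = mb_per_min * 60 * hours
print(f"{total_mb / 1_000_000:.2f} TB")  # roughly 1.08 TB
```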
David, could you further explain what connects to what (end-to-end), and at what link speeds?
512 MB/min and 75% - 100 Mbps host connections?
If you're looking for a really big reduction in backup time, you'll likely need to move up to gig. You're already running at better than 70% of 100 Mbps, so even if we can get another 20% you would only shave off about a third of your backup time.
Things to check for optimal performance, if the backup is using TCP: the receiver has its receive buffer sized for the BDP (bandwidth-delay product); check for drops or other errors on LAN port interfaces along the path; ensure you're not losing needed bandwidth to other traffic on a shared link; and ensure you're not oversubscribing path bandwidth (e.g. eleven 100 Mbps hosts sending to a gig uplink or gig backup server concurrently).
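To illustrate the BDP point above, a quick calculation (the 1 ms round-trip time is an assumed LAN value, not a measurement): the receive buffer must hold at least bandwidth × RTT of data or TCP can't keep the link full.

```python
# Estimate the TCP receive buffer needed to keep a link full:
# BDP (bytes) = bandwidth (bits/s) * RTT (s) / 8.
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    return bandwidth_bps * rtt_s / 8

# Example: a gig LAN path with an assumed 1 ms round-trip time.
buf = bdp_bytes(1_000_000_000, 0.001)
print(f"receive buffer should be at least {buf / 1024:.0f} KiB")
```

A receive buffer smaller than this caps throughput regardless of link speed, which is one way a gig path can still move data at 100 Mbps rates.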
I have the server using Symantec backup, NIC set to auto, connected to a gig port on the 2960. The other server is the same: NIC set to auto, connected to a gig port. When I open Task Manager on the servers and look at the network usage, they are both under 25%. However, when I look at the port usage through the Cisco Network Assistant, I see that both ports are at 75% or higher used bandwidth. I am not opposed to upgrading to a gig switch; however, I have the servers plugged into gig now. Port trunking two gig ports is where I am leaning. Comments?
Are the hosts on the same 2960 or different 2960s? If the latter, please describe the topology.
You've described the transfer rate as 512 MB/min, which is about 68 Mbps. That's about right for 75% of 100 Mbps, but not for 75% of gig (on the switch) or 25% of gig (on the server).
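The unit conversion behind that reply, worked out (assuming the 512 figure is megabytes per minute):

```python
# Convert the reported backup rate of 512 MB/minute to megabits/second.
mb_per_min = 512
mbps = mb_per_min * 8 / 60            # 8 bits per byte, 60 seconds per minute
fraction_of_fast_ethernet = mbps / 100  # share of a 100 Mbps link
print(f"{mbps:.1f} Mbps, {fraction_of_fast_ethernet:.0%} of 100 Mbps")
```

That lands near the 75% utilization reported for a 100 Mbps link, but would be under 7% of a gig link, which is why the reported speeds and percentages don't line up for gig.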
Are you sure both host ports are actually running at gig? The other thing that comes to mind: a duplex error might account for the high port percentages but very low throughput (7% of gig seems reasonable if there's a duplex mismatch).
Initially the hosts were not on the same 2960, and I was getting the same behavior. For testing, I moved the hosts to the same 2960. The NICs on the servers are set to auto.
If you happen to be able to look at the switch while it's happening, use the show controllers utilization command: it will show you how busy every single port is, along with how much load is on the switch fabric itself, and will give you an idea whether the switch is actually a bottleneck or something else is going on. Use the show interface counters errors command to look for speed/duplex mismatches and ports taking errors. Make sure any NIC settings match what the ports are set for: auto if the switchport is auto, hardcoded if the switchport is hardcoded; otherwise this will cause major slowdowns, especially during heavy transfers.
In sh_int.txt you show an active gig port and an active FastE port, not two active gig ports?
Also, the active gig port, g0/1, shows the connection running at 100 Mbps?
GigabitEthernet0/1 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is
MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
reliability 255/255, txload 167/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 100Mb/s, media type is 10/100/1000BaseTX
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input never, output 00:00:01, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 724000 bits/sec, 84 packets/sec
5 minute output rate 65674000 bits/sec, 5500 packets/sec
4016127 packets input, 2693618372 bytes, 0 no buffer
Received 755 broadcasts (0 multicast)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 0 multicast, 0 pause input
0 input packets with dribble condition detected
352246016 packets output, 3577238495 bytes, 0 underruns
0 output errors, 0 collisions, 1 interface resets
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 PAUSE output
0 output buffer failures, 0 output buffers swapped out
GigabitEthernet0/2 shows a gig connection, but if the other host is on FastE, you're not going to get gig speed host-to-host. Also, e24 shows as being down?
Unless your Internet connection provides more than 100 Mbps, why connect there?
After this post, I am off to get coffee. It is port 22 that I am referring to, not 24. I am going to swap the host on 22 with the host on G1 and monitor again. I still see high bandwidth on G2. What do the outputs look like to you? Does it make sense to trunk two gig ports to increase speed on the switch?
As long as either host is connected to a FastE port, 73+ Mbps is reasonable. As I think I wrote before, we can try to increase the bit rate toward 100 Mbps, but it's likely much, much easier to use gig ports.
What I haven't seen yet, I believe, are stats for both hosts on gig ports that are actually running at gig.
If by trunking ports you mean forming a multi-port channel, that will likely not increase bandwidth between a single pair of hosts (unless the hash also spreads their traffic across links). Such Ethernet channels, at least on Cisco, don't use a different port for every packet of an individual flow; they alternate flows (based on some hash formula) across the ports.
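A toy sketch of that hashing behavior (the XOR-of-MACs hash here is an illustrative assumption, not Cisco's actual formula) shows why a single backup flow can't exceed one member link's speed:

```python
# Toy model of EtherChannel load balancing: each flow is hashed to
# exactly one member link, so one backup flow always uses one port.
def member_link(src_mac: int, dst_mac: int, num_links: int) -> int:
    # Illustrative XOR hash; real switches use their own formula.
    return (src_mac ^ dst_mac) % num_links

# A single flow between two fixed hosts always lands on the same link:
links_used = {member_link(0x001122, 0x334455, 2) for _ in range(1000)}
print(len(links_used))  # 1 - the flow never spreads across both ports
```

Aggregate throughput across many host pairs can improve, but one server-to-server backup stream still tops out at a single port's speed.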
"I still see high bandwidth on G2."
From your prior attachment, the outbound load is under 7%. I wouldn't consider that high bandwidth for gig.