
Slow transfer rate in 3750 stack

Telindus Espana
Level 1

We noticed a very low transfer rate in the stack when it has to do inter-VLAN routing; if the PCs are in the same VLAN, everything works fine.

The stack does not have a complicated setup: no QoS or access lists are applied to the ports we tested with, nothing else.

I have read online that other people have this problem too. Can anyone help me?

Switch   Ports  Model              SW Version              SW Image           
------   -----  -----              ----------              ----------         
*    1   12     WS-C3750G-12S      12.2(35)SE5             C3750-IPBASE-M     
     2   12     WS-C3750G-12S      12.2(35)SE5             C3750-IPBASE-M     
     3   28     WS-C3750G-24TS-1U  12.2(35)SE5             C3750-IPBASE-M     
     4   28     WS-C3750G-24TS-1U  12.2(35)SE5             C3750-IPBASE-M  

Thanks in advance.

6 Replies

Peter Paluch
Cisco Employee

Hello Oscar,

Interesting. Does this happen for all inter-VLAN routing, or is only a particular combination of VLANs (from-to) affected?

For a multilayer switch in an elementary configuration, inter-VLAN routing should be just as fast as intra-VLAN switching. A dramatic decline in the throughput of routed traffic is often caused by the traffic being processed by the CPU instead of being routed in hardware. IP packets that cannot be routed in hardware include packets requiring fragmentation (check the MTU settings!), packets with IP options, IP packets requiring ICMP responses, and other packets that are unsupported by the CEF/TCAM infrastructure.
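
For example, a quick and non-disruptive way to check the MTU settings is shown below (substitute the SVI of the VLAN you are routing for vlan 2; the exact wording of the output differs between IOS releases):

show system mtu
show ip interface vlan 2 | include MTU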

Try using the show cef not-cef-switched and show ip cef switching statistics commands to verify the counts of punted packets (packets sent to the CPU instead of being switched in hardware). High counts that keep increasing over time indeed suggest that the traffic is being sent to the CPU. Also verify whether there are any punt-type adjacencies installed in the TCAM using the show ip cef adjacency punt command - ideally, there should be none.
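
To see whether these counters keep increasing while the slow transfer is running, it is enough to take the same filtered reading twice, a short while apart, and compare the totals (the interval is arbitrary; the filter only shortens the output):

show ip cef switching statistics | include Total
(wait a while with the slow transfer running, then repeat)
show ip cef switching statistics | include Total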

Also, what SDM template are you using? Is there enough space in the TCAM for your IPv4 routing information? Verify it using show platform tcam utilization and check that the "Used Masks/values" column reports a smaller usage for IPv4 routes than the "Max Masks/Values" column. If a route cannot be installed into the TCAM, the packets will again be punted.
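
Both checks are read-only; the active template and the TCAM utilization can be displayed with the commands below (the figures will of course depend on your stack):

show sdm prefer
show platform tcam utilization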

Best regards,

Peter

Thanks for your answer Peter

We have several VLANs, but they only communicate with VLAN 2.

interface Vlan2
 ip address 10.43.20.49 255.255.255.0 secondary
 ip address 10.43.22.49 255.255.255.0 secondary
 ip address 10.43.26.49 255.255.255.0 secondary
 ip address 10.43.21.53 255.255.255.0
 no ip unreachables

If I connect a PC to Gi 4/0/23 (VLAN 2) with an address in the 10.43.20.X range and another PC to Gi 4/0/24 (VLAN 2) with an IP in the 10.43.21.X range, the transfer rate is very, very slow.

If I connect a PC to Gi 4/0/23 (VLAN 2) with an address in the 10.43.22.X range and another PC to Gi 4/0/24 (VLAN 2) with an IP in the 10.43.26.X range, the transfer rate is good.

(without Layer 3) --> If I connect a PC to Gi 4/0/23 (VLAN 2) with an address in the 10.43.21.X range and another PC to Gi 4/0/24 (VLAN 2) with an IP in the 10.43.21.X range, the transfer rate is good.

If I connect a PC to Gi 4/0/23 (changed to VLAN 3) with an address in the X.X.X.X range and another PC to Gi 4/0/24 (VLAN 2) with an IP in the 10.43.21.X range, the transfer rate is very, very slow.

I won't be able to do the tests you mention until Tuesday.

Best Regards.

Hello Oscar,

Thank you for the information you've provided. I will have to go over it more carefully. In the meantime, could you please run the show commands I suggested in my first post? They are not disruptive and can be performed anytime.

EDIT: I apologize, I did not realize that you won't be able to run those show commands until Tuesday. No problem, we'll wait till then.

Best regards,

Peter

Hello!!

I managed to reproduce the problem in the lab with only one switch.

The transfer rates are very low with this test (transfer rate attached).

PC1 10.43.21.50                                                        PC2 10.43.20.50
DG  10.43.21.53 ----- Gb 5/0/23 (vlan2) SWITCH ----- Gb 5/0/23 (vlan2) DG  10.43.20.49

Backbone#show run int vlan 2
Building configuration...

Current configuration : 256 bytes
!
interface Vlan2
 ip address 10.43.20.49 255.255.255.0 secondary
 ip address 10.43.22.49 255.255.255.0 secondary
 ip address 10.43.26.49 255.255.255.0 secondary
 ip address 10.43.21.53 255.255.255.0
 no ip unreachables

Troubleshooting commands:
Backbone#show cef not-cef-switched
% Command accepted but obsolete, see 'show (ip|ipv6) cef switching statistics [feature]'
IPv4 CEF Packets passed on to next switching layer
Slot  No_adj No_encap Unsupp'ted Redirect  Receive  Options   Access     Frag
RP         1       0           9        0        0        0        0        0
Backbone#show ip cef switching statistics
       Reason                          Drop       Punt  Punt2Host
RP LES No adjacency                       0          0          1
RP LES Incomplete adjacency               0          0          8
RP LES Total                              0          0          9
All    Total                              0          0          9
Backbone#show ip cef adjacency punt
Prefix               Next Hop             Interface
Backbone#show platform tcam utilization
CAM Utilization for ASIC# 0                      Max            Used
                                             Masks/Values    Masks/values
Unicast mac addresses:                        784/6272         12/32   
IPv4 IGMP groups + multicast routes:          144/1152          6/26   
IPv4 unicast directly-connected routes:       784/6272         12/32   
IPv4 unicast indirectly-connected routes:     272/2176         11/63   
IPv4 policy based routing aces:                 0/0             0/0    
IPv4 qos aces:                                528/528          18/18   
IPv4 security aces:                          1024/1024        102/102  
Note: Allocation of TCAM entries per feature uses
a complex algorithm. The above information is meant
to provide an abstract view of the current TCAM utilization

Could it be a problem with the secondary IP addresses? I will continue testing.
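
One more read-only check on the SVI: show ip interface lists the primary and secondary addresses together with the ICMP redirect/unreachable settings, which Peter mentioned among the possible reasons for punting (the exact output lines vary per IOS release):

show ip interface Vlan2
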
Best Regards,

Hello

The tests I've done in the lab are not reliable: with 2 PCs connected directly I get the same transfer rate. I'm trying to have the output of the commands you asked for sent to me.

Regards.

Hello

This is the output of the commands on the switch that has the problem:

show cef not-cef-switched
% Command accepted but obsolete, see 'show (ip|ipv6) cef switching statistics [feature]'

IPv4 CEF Packets passed on to next switching layer
Slot  No_adj No_encap Unsupp'ted Redirect  Receive  Options   Access     Frag
RP    546463       0      546483        0        0        0        0        0
3          0       0           0        0        0        0        0        0
2          0       0           0        0        0        0        0        0
1          0       0           0        0        0        0        0        0
Backbone#
Backbone#show ip cef switching statistics

       Reason                          Drop       Punt  Punt2Host
RP LES No route                           6          0          0
RP LES No adjacency                   87576          0     546488
RP LES Incomplete adjacency               0          0          1
RP LES TTL expired                        0          0         19
RP LES Total                          87582          0     546508

All    Total                          87582          0     546508
Backbone#
Backbone#show ip cef adjacency punt
Prefix               Next Hop             Interface
Backbone#
Backbone#show platform tcam utilization

CAM Utilization for ASIC# 0                      Max            Used
                                             Masks/Values    Masks/values

Unicast mac addresses:                        784/6272         79/571  
IPv4 IGMP groups + multicast routes:          144/1152          6/26   
IPv4 unicast directly-connected routes:       784/6272         79/571  
IPv4 unicast indirectly-connected routes:     272/2176         17/104  
IPv4 policy based routing aces:                 0/0             0/0    
IPv4 qos aces:                                528/528          18/18   
IPv4 security aces:                          1024/1024        110/110 

Note: Allocation of TCAM entries per feature uses
a complex algorithm. The above information is meant
to provide an abstract view of the current TCAM utilization


Backbone#

Regards.
