4500 switch reports ports as 10/full while other side reports 100 or 1000/full

ashirel
Level 1

We have a 4510R-E running 12.2(50)SG1 with various RJ45 line cards and a 24-port GLC card.

At any given time, I see tens of ports in 10/full.

I go to the station and find it running at 100/full or 1000/full.

I go to the switch with my Fluke and connect it directly to the switch, with no intermediate infrastructure except a 50 cm Cat6 patch cable. The Fluke reports 1000/full, but THE SWITCH PORT STILL REPORTS 10/FULL.

All ports are configured 'speed auto' and 'duplex auto'. The switch seems to be erroneously reporting 10/full.

There are no errors logged on any of the ports, and communication is successful even when the station and the switch port report different speeds.

Even though no errors are logged or reported by 'show int [port]' or 'sho int count error', certain killer applications crash on some stations. The applications are GHOST (which dumps disk images from a server to multiple stations) and NETOP (which is used in a classroom to transmit an instructor's screen to a room full of stations), both of which broadcast and/or multicast.

All nodes involved in the above two applications are on the same VLAN and the same physical subnet.

The ports that report 10/full vary, and the problem occurs even when the above applications are not in use.

The only way I have found to clear the 10/full report is either a hardware reset of the entire module, or unplugging the cable and then executing shutdown, speed auto, no shutdown on the port before reconnecting the cable.
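Roughly, the per-port sequence I run looks like this (the interface number here is just an example):

! unplug the station cable first
conf t
 interface GigabitEthernet2/3
  shutdown
  speed auto
  no shutdown
 end
! then reconnect the cable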

Then it's just a matter of time until the port pops back to 10/full within a few minutes, hours, or days.

Any ideas as to the cause of the discrepancy in the speed reported on either side of the wire?

Any ideas on how to address the killer-application problems, short of restructuring the whole network by defining a separate VLAN for each lab of 20 or so stations?

Thanks,

ams


7 Replies

Leo Laohoo
Hall of Fame

Hmmmm ...

Can you try to run a TDR test?  I'm not sure if the 4500 supports this feature (I don't have one to test).

1.  Command:  "test cable tdr int <interface>";

2.  Wait for about 5 to 7 seconds;

3.  Command:  "sh cable tdr interface <interface>"; and

4.  Please post the output of the command.

How to use Time-Domain Reflectometer (TDR)
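For example, against one suspect port (the interface name here is just an example):

test cable tdr int g1/1
! wait about 5 to 7 seconds for the test to complete
sh cable tdr int g1/1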

I tried some TDR testing, but only on non-connected ports, since I don't know if the test will disconnect or adversely affect activity on the line.

Your attachment explaining the TDR states that with IOS "12.2(46)SE or earlier, TDR test is DISRUPTIVE".

Does this mean that in later versions it's non-disruptive? Or just that the writer didn't know for sure that it is disruptive? If no one can give me a solid answer on this, I'll just go over to the building in question and make sure there's no user on the ports when I check them (and make sure the door is locked!).

I have the following cards in the 4500; only the 4548-GB-RJ45 gave output.

I assume this means that TDR is not supported on the others.

Does anyone know what effect the test has on traffic on active ports?

WS-X4648-RJ45-E

WS-X4648-RJ45V+E

WS-X4548-GB-RJ45

The last card gave output as follows for the ports I checked.

switch4500-beren#test cable tdr int g8/2

switch4500-beren#sho cable tdr int g8/2

Interface Speed  Local pair Cable length Remote channel Status

Gi8/2     0Mbps   1-2         41 +/-10m   Unknown       Fault       

                  3-6         41 +/-10m   Unknown       Fault       

                  4-5         42 +/-10m   Unknown       Fault       

                  7-8         41 +/-10m   Unknown       Fault       

switch4500-beren#test cable tdr int g9/1

switch4500-beren#sh cable tdr int g9/1    

Interface Speed  Local pair Cable length Remote channel Status

Gi9/1     0Mbps   1-2         33 +/-10m   Unknown       Fault       

                  3-6         32 +/-10m   Unknown       Fault       

                  4-5         32 +/-10m   Unknown       Fault       

                  7-8         32 +/-10m   Unknown       Fault       

switch4500-beren#test cable tdr int g10/1

switch4500-beren#sho cable tdr int g10/1

Interface Speed  Local pair Cable length Remote channel Status

Gi10/1    0Mbps   1-2        N/A          Unknown       Terminated  

                  3-6        N/A          Unknown       Terminated  

                  4-5        N/A          Unknown       Terminated  

                  7-8        N/A          Unknown       Terminated  

switch4500-beren#

I hope I get more interesting results when I try with connected ports.

Does this mean that in later versions it's non-disruptive?

That is correct.  Versions above 12.2(46)SE are not disruptive, as long as you are not testing a PoE or Gigabit Ethernet interface.

Or just that the writer didn't know for sure that it is disruptive?

I've never tried it on a 4500 chassis, so your result is very, very interesting.

The results for Gig 8/2 and Gig 9/1 show a speed of 0 Mbps??? But there's a length. Are the ports enabled? What IOS are you running?

Hi Leo,

Sorry about the delay; I had other issues to attend to.

All the ports I checked are definitely enabled.

Running 12.2(50)SG1.

Concerning the disruptiveness of the test: all of the interfaces are Gigabit, and some of the cards are PoE.

Now, in parallel to the infrastructure issues, I'm planning a workaround of splitting the VLAN into community PVLANs, so that broadcasts from the NETOP application (which transmits the instructor's screen contents to the screens of the student stations) are localized to a particular lab classroom and don't wreak havoc when multiple classrooms are doing this at once. It seems to me that even though the community PVLAN was not designed with this in mind, it's a perfect fit.
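The rough shape of the config I have in mind is something like this (the VLAN numbers and interfaces are just placeholders, and I still need to verify the exact syntax against the 4500 configuration guide):

vtp mode transparent
!
vlan 101
 private-vlan community
vlan 100
 private-vlan primary
 private-vlan association 101
!
! a lab-classroom station port joins its community
interface GigabitEthernet2/3
 switchport mode private-vlan host
 switchport private-vlan host-association 100 101
!
! the server/uplink port needs to reach every community
interface GigabitEthernet1/1
 switchport mode private-vlan promiscuous
 switchport private-vlan mapping 100 101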

darren.g
Level 5

ashirel wrote:

We have a 4510R-E running 12.2(50)SG1 with various RJ45 line cards and a 24-port GLC card.

At any given time, I see tens of ports in 10/full.

I go to the station and find it running at 100/full or 1000/full.

I go to the switch with my Fluke and connect it directly to the switch, with no intermediate infrastructure except a 50 cm Cat6 patch cable. The Fluke reports 1000/full, but THE SWITCH PORT STILL REPORTS 10/FULL.

All ports are configured 'speed auto' and 'duplex auto'. The switch seems to be erroneously reporting 10/full.

There are no errors logged on any of the ports, and communication is successful even when the station and the switch port report different speeds.

Even though no errors are logged or reported by 'show int [port]' or 'sho int count error', certain killer applications crash on some stations. The applications are GHOST (which dumps disk images from a server to multiple stations) and NETOP (which is used in a classroom to transmit an instructor's screen to a room full of stations), both of which broadcast and/or multicast.

All nodes involved in the above two applications are on the same VLAN and the same physical subnet.

The ports that report 10/full vary, and the problem occurs even when the above applications are not in use.

The only way I have found to clear the 10/full report is either a hardware reset of the entire module, or unplugging the cable and then executing shutdown, speed auto, no shutdown on the port before reconnecting the cable.

Then it's just a matter of time until the port pops back to 10/full within a few minutes, hours, or days.

Any ideas as to the cause of the discrepancy in the speed reported on either side of the wire?

Any ideas on how to address the killer-application problems, short of restructuring the whole network by defining a separate VLAN for each lab of 20 or so stations?

Thanks,

ams

Some devices just don't auto-negotiate properly.

Have you tried manually forcing the affected ports to 1000/full? What happens if you do?
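Something along these lines on one affected port, just as a test (the interface number is only an example, and exact keyword support varies a bit by line card):

conf t
 interface GigabitEthernet2/3
  speed 1000
  duplex full
 end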

Are you certain your cabling is at minimum Cat5e certified through the whole path to get a proper 1 gig connection across? Are you running close to the maximum cable distance on any of your paths?

You could have malfunctioning hardware - have you got a Smartnet contract on the 4500? If so, log a call with Cisco and see if they can offer further diagnosis.

Cheers.

All of the cabling is pre-Cat7; it's all between Cat5 and Cat6e.

After hesitating for a while, I attempted to set some connections to a fixed 100/full. However, on the station side, many of the NIC drivers didn't even support manual setting of speed/duplex.

From what I understand and have seen on various forums, the pre-1Gb standards were not as strong (for autonegotiation, that is) as the 1Gb standard, both in terms of the technical reliability of the standard itself and the way it was implemented in many cases.

However, the 1Gb standard is always recommended if both sides of the wire support it. OK, that's the way the world should be. Anyway, that's why I hesitated.

In the end, I set the switch port to 100/full even though the station was not configurable, so it stayed at auto. I know this is bad medicine.

It seemed OK for a week or so, but yesterday I noticed lots of alignment errors and runts, as much as 20% of the packet count.

I saw these errors ONLY on the 10 ports that I configured to 100/full (presumably a duplex mismatch: with the switch hard-set to full and the station left at auto, the station would have fallen back to half duplex). I immediately reset the switch ports to auto, zeroed the counters, and it's clean again. Unfortunately, I can't say yet whether the killer applications survive, since nothing really changed...

I may be able to locate drivers which support manual configuration of the speed/duplex.

I'll also see if the TDR suggested in another post turns up any surprises.

Thanks,

ams

ashirel wrote:

All of the cabling is pre-Cat7; it's all between Cat5 and Cat6e.

After hesitating for a while, I attempted to set some connections to a fixed 100/full. However, on the station side, many of the NIC drivers didn't even support manual setting of speed/duplex.

From what I understand and have seen on various forums, the pre-1Gb standards were not as strong (for autonegotiation, that is) as the 1Gb standard, both in terms of the technical reliability of the standard itself and the way it was implemented in many cases.

However, the 1Gb standard is always recommended if both sides of the wire support it. OK, that's the way the world should be. Anyway, that's why I hesitated.

In the end, I set the switch port to 100/full even though the station was not configurable, so it stayed at auto. I know this is bad medicine.

It seemed OK for a week or so, but yesterday I noticed lots of alignment errors and runts, as much as 20% of the packet count.

I saw these errors ONLY on the 10 ports that I configured to 100/full (presumably a duplex mismatch: with the switch hard-set to full and the station left at auto, the station would have fallen back to half duplex). I immediately reset the switch ports to auto, zeroed the counters, and it's clean again. Unfortunately, I can't say yet whether the killer applications survive, since nothing really changed...

I may be able to locate drivers which support manual configuration of the speed/duplex.

I'll also see if the TDR suggested in another post turns up any surprises.

Thanks,

ams

Looking at your symptoms now, I'd say you've either got some dodgy NICs on some nodes, or your line card has some weird problems with one of the ASICs running the ports.

Are the ports which "fail" in the same group (i.e. 1-8 or 9-16, etc.), or are they random across the switch/line card?

What happens to the problem nodes if you plug them into different ports - does the problem follow them, or does it happen to the new nodes on the same ports?

I'd be leaning towards the dodgy NIC possibility myself, since you say it's 10 nodes, and Cisco ASICs typically run ports in multiples of 8, so getting 10 ports with a problem would mean it was across two different ASICs, and that one of them was only partially faulty - which would be weird.

Can you replace the NICs on the units causing the problems at all?

Cheers.
