7945 POE Issue

Unanswered Question
Sep 4th, 2012

We are running several different types of Catalyst 6500s, all recently upgraded to s72033-ipservicesk9-mz.122-33.SXI9; the 7945 phones are running SCCP45.9-2-1S.

Randomly, phones across several switches (no common switch, port, or card) will lose power, but the port for that phone still shows up/up. This is resolved by resetting the card and turning off port security, or just by resetting the card. Power allocation is not the issue; there is more than enough available. Below is what we see when one of the phones is off. This does not appear to happen to a phone twice; once a phone is restored it seems to stay up, but the problem then moves to another phone or group of phones.


Gi1/0/4   auto   on         15.4   Ieee PD             3     30.0


The only change made has been the IOS update; everything else is exactly as it was before the upgrade.


     Thanks ahead of time for any assistance.

Leo Laohoo Tue, 09/04/2012 - 16:54
User Badges:
  • Super Gold, 25000 points or more
  • Hall of Fame,

    The Hall of Fame designation is a lifetime achievement award based on significant overall achievements in the community. 

  • Cisco Designated VIP,

    2017 LAN, Wireless

Hmmmm ... Looks like a cable issue.


On the switch, please run the following commands:


1.  Command:  test cable tdr interface Gi1/0/4;

2.  Wait for approximately 62 seconds (yes, that's how long it'll take for this test to complete on a 6500 chassis);

3.  Command:  sh cable tdr interface Gi1/0/4; and

4.  Post the output.
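Putting the steps above together as a single console sketch (the interface is taken from the original post; the prompt name is a placeholder, and note that some IOS trains spell the show command `show cable-diagnostics tdr` rather than the short form given here):

```
Switch# test cable tdr interface Gi1/0/4
! wait roughly a minute for the test to complete on a 6500 chassis
Switch# show cable tdr interface Gi1/0/4
```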

geprosser Wed, 09/05/2012 - 09:03

Thanks for the reply. I will run that when I get a chance, but do you have any other possible causes? These were all working correctly before the IOS was upgraded to s72033-ipservicesk9-mz.122-33.SXI9. Also, the problem has not resurfaced once everything was reset.

Leo Laohoo Wed, 09/05/2012 - 15:26

"Thanks for the reply. I will run that when I get a chance, but do you have any other possible causes?"

For the last two years, I've been deploying WAPs all over my network (90+ sites), and I've seen my fair share of WAPs booting up, NOT joining the controllers, and getting the dreaded "IEEE" output.


11 out of 10 times, it's a cabling fault.



"These were all working correctly before the IOS was upgraded to s72033-ipservicesk9-mz.122-33.SXI9."


The IOS upgrade has nothing to do with it.

geprosser Thu, 09/06/2012 - 05:34

Here is the output:


TDR test last run on: September 05 14:45:07

Interface Speed Pair Cable length    Distance to fault  Channel  Pair status
--------- ----- ---- --------------- ------------------ -------- -----------
Gi4/21    1000  1-2  36  +/- 20 m    N/A                Pair A   Terminated
                3-6  44  +/- 20 m    N/A                Pair B   Terminated
                4-5  44  +/- 20 m    N/A                Pair C   Terminated
                7-8  46  +/- 20 m    N/A                Pair D   Terminated
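For anyone checking a larger batch of ports, the TDR table can be screened mechanically. This is a rough Python sketch (not Cisco tooling) that parses lines shaped like the output above and flags any pair whose "Distance to fault" column is not N/A; the sample text reuses the table's layout but deliberately injects a hypothetical fault distance on pair 4-5 so there is something to flag:

```python
import re

# Each data line looks like:
#   Gi4/21   1000 1-2 36   +/- 20 m       N/A                 Pair A
# The interface and speed columns appear only on the first pair's line.
PAIR_RE = re.compile(
    r"(?:(?P<intf>\S+)\s+(?P<speed>\d+)\s+)?"          # optional interface/speed
    r"(?P<pins>\d-\d)\s+(?P<length>\d+)\s+\+/-\s+\d+\s+m\s+"
    r"(?P<fault>\S+)\s+Pair\s+(?P<channel>[A-D])"
)

def faulty_pairs(tdr_text):
    """Return the wire pairs whose 'Distance to fault' column is not N/A."""
    bad = []
    for line in tdr_text.splitlines():
        m = PAIR_RE.search(line)
        if m and m.group("fault") != "N/A":
            bad.append(m.group("pins"))
    return bad

# Sample shaped like the output above, with a made-up fault on pair 4-5
sample = """\
Gi4/21   1000 1-2 36   +/- 20 m       N/A                 Pair A
              3-6 44   +/- 20 m       N/A                 Pair B
              4-5 44   +/- 20 m       12                  Pair C
              7-8 46   +/- 20 m       N/A                 Pair D
"""
print(faulty_pairs(sample))  # → ['4-5']
```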

Leo Laohoo Thu, 09/06/2012 - 16:04

The output looks OK.


The phones that constantly go down/up, are these THE SAME ONES?  All the time?


If the answer is yes, can you run the same commands on a few of the phone ports?

geprosser Thu, 09/06/2012 - 16:16

It has only happened to each affected phone once; after a phone was restored, it has not happened to that phone again. Here is a rundown of how the issues were resolved.


Switch #1 - first occurrence

Sunday morning we upgraded the switch to a new IOS version (from s72033-ipservicesk9-mz.122-18.SXF17b to s72033-ipservicesk9-mz.122-33.SXI9).

Monday morning, soon after duty hours started, we had several cards drop connections. Upon review of the incident we noticed that the Cisco phones were down and that the network connection was disabled. Attempts were made to re-enable the ports without success. The fix for this switch was to revert to the previous IOS version.


Switch #2 failure

Tuesday, located in another building (this was switch #3 in that building), with the same failure as experienced on the first switch. Only two cards in the switch failed. We were successful at re-enabling the ports, and also had to remove port security to keep the ports from failing. Since this fix we have experienced no more issues with this switch.


Switch #3

Wednesday: on another switch, slots 1 and 2 failed with the same symptoms: no network connectivity and all Cisco phones down. After re-enabling the ports and removing port security, the ports showed as connected; however, the phones continued to seek IP addresses and there was still no network connection. We performed a hw-module 1 reset and a hw-module 2 reset, and after the cards reset, the connections were restored. Since then we have not experienced any further issues. Port security remains off until further troubleshooting and isolation of the problem.


I hope this better clarifies the issues/events since the new IOS was loaded.
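The recovery steps described for switches #2 and #3, sketched as a console session (the interface range is a placeholder, not the actual config; the module numbers match the slots named above):

```
! Re-enable the failed ports and remove port security (placeholder range)
Switch(config)# interface range GigabitEthernet1/1 - 48
Switch(config-if-range)# no switchport port-security
Switch(config-if-range)# shutdown
Switch(config-if-range)# no shutdown

! Switch #3 additionally needed the line cards reset (slots 1 and 2)
Switch# hw-module module 1 reset
Switch# hw-module module 2 reset
```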

geprosser Fri, 09/21/2012 - 07:16

Here is a new update:


We had another power outage in the same building where we lost the card a week ago. Again we had power issues with phones: not just individual, sporadic ports, but a logical group of ports. I cleared the problem with a shut/no shut range command on the affected ports. They spanned two different cards, so it wasn't just one card.
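The shut/no shut fix, as a console sketch (the range below is a placeholder; substitute the affected ports on both cards):

```
Switch(config)# interface range Gi1/0/1 - 24 , Gi2/0/1 - 24
Switch(config-if-range)# shutdown
Switch(config-if-range)# no shutdown
```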


The real issue is that any time we have an outage, we will have to check that all of the ports have come back up correctly. In this case we were at work and everything was fixed quickly. Clearly this IOS does not like something.
