We have a Catalyst 3560G-48 PoE switch that developed some PoE problems. The switch had been in service without issue for at least the past six months with only one PoE device attached, an AIR-AP1242AG-A-KP on port 48. We are in the process of implementing Cisco VoIP and went to this switch to test VLAN trunking for the voice VLAN. When we plugged a 7975 phone directly into the switch, the phone did not even attempt to power up. We tried several different ports with the same result, and we also tried a couple of other Cisco phone models (7961 and 7970); none of them powered up at all.

Investigating the switch with show power inline, it was reporting 5 or 6 ports with 15 W of power in use even though the only PoE device plugged in was the wireless AP on port 48. The reported ports seemed to correspond to the ports we had tried plugging the phones into; those ports were inactive (nothing plugged in) yet still showed 15 W of power in use. At this point, the PoE wireless AP on port 48 was still working fine.

We built up an identical switch, took it out to the site, and that switch is behaving fine. We brought the problem switch back to the office for testing, and it now seems to be delivering PoE correctly: I have plugged 6 or 7 PoE phones (7961, 7970, and 7975) into it, and they all power right up and work.

I am just curious whether anyone has seen this type of PoE issue with a 3560G-48 PoE. We have many of these out in the field, and this is the first time we have come across it. Could it have just needed a reboot? We are hesitant to put it back out into the field since it had this problem. Could there be something wrong with the PoE on this switch? It seems the ports were delivering some power, just not enough to bring the devices up. What types of tests can we perform to thoroughly test PoE on this switch?
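For context, this is roughly what the power accounting looked like; the output below is a hypothetical mock-up of show power inline, not a capture from the actual switch (note the "on" ports with nothing attached):

```
Switch# show power inline
Interface Admin  Oper       Power   Device              Class Max
                            (Watts)
--------- ------ ---------- ------- ------------------- ----- -----
Gi0/5     auto   on         15.4    n/a                 n/a   15.4
Gi0/6     auto   on         15.4    n/a                 n/a   15.4
Gi0/48    auto   on         15.4    AIR-AP1242AG-A-KP   n/a   15.4
```

Ports Gi0/5 and Gi0/6 had nothing plugged in but were still shown as drawing 15.4 W, which matches the ports we had tried the phones on.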
Or should I contact Cisco support for possible warranty replacement?
I think the ports just need to be bounced (i.e., shut/no shut). Maybe you already tried that, but the reboot may have fixed it either way. I don't know about the rest, but I often cringe whenever I find an appliance with an uptime of more than a year.
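If it happens again, the bounce is quick to try before a full reload. A minimal sketch (the interface range below is hypothetical; substitute the affected ports):

```
Switch# configure terminal
Switch(config)# interface range gigabitEthernet 0/1 - 6
Switch(config-if-range)# shutdown
Switch(config-if-range)# no shutdown
Switch(config-if-range)# end
```

Bouncing the port forces the PoE controller to re-run device detection and classification on that port, which is much less disruptive than reloading the whole switch.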
Thanks for the reply. I did not try a shut/no shut on those ports, but I guess a switch reboot would do about the same thing. Have you seen this same issue with a PoE switch at all? Do you have a recommendation on a switch reboot time frame? We don't reboot our switches regularly; is there a Cisco recommendation for that?
If I remember correctly, this switch takes close to 3 minutes for a complete reboot. Just make sure the config has been saved prior to reloading the device.
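For reference, saving the config and reloading are standard IOS commands; something like this (prompts abbreviated):

```
Switch# copy running-config startup-config
Destination filename [startup-config]?
Building configuration...
[OK]
Switch# reload
Proceed with reload? [confirm]
```

If the running config hasn't been saved, the reload command will also warn that the configuration has been modified and ask whether to save it first.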
I guess I wasn't clear about what I meant by time frame. I was asking about the interval between switch reboots (e.g., every 3 months, 6 months, etc.). Is there a recommendation on how often switches should be power cycled?
Back in 2007, we had a stack of four 3750E PoE switches. I witnessed one member of the stack suddenly STOP delivering PoE on all 48 ports. The logs showed everything going "down/down" within a span of 1 second. In a panic, I reloaded the switch and everything normalized. What I should have tried was a shut/no shut on the ports to see if that helped. Searching the Cisco Bug Toolkit, there was one bug which mentioned that the event couldn't be replicated and was simply put down to "bad timing" or "bad luck". What I experienced was something akin to a hole-in-one in a foggy snowstorm.
I'm currently working with a number of guys who share the same opinion that any Cisco appliance running for more than a year is bad news. We try to find ways and means just to reboot these; IOS upgrades and "oopsies" are common occasions.
Yeah, we saw something like this not too long ago. The fix was a reload of the switch, so it was obviously some kind of code bug. You might want to check the Bug Toolkit and see if there is anything for your release on the 3560.
When you mention a "reload" of the switch, do you mean rebooting or a rebuild of the switch?
I did take a look at the Bug Toolkit, and there do seem to be some PoE issues that may be addressed in IOS versions later than what our switch has. The switch in question is currently running 12.2(35)SE5.
I also noticed in the Bug Toolkit that it mentions some issues with PoE APs.
Thanks for the clarification.
According to the Bug Toolkit, our IOS release does have some PoE issues. The version information on our switch is: Cisco IOS Software, C3560 Software (C3560-IPBASE-M), Version 12.2(35)SE5, RELEASE SOFTWARE (fc1). I am new to doing a Cisco IOS update. The Bug Toolkit article references the following releases as fixes: 12.2(50)SE, 12.2(44)SE4, and 12.2(50)SE1. I am not sure which file to download: IPBASE with or without crypto. Also, when I went to download (50)SE1, there was a software advisory regarding PoE. Should I go one version up (52) from that, or go to the latest version available?
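For what it's worth, on the 3560 the tar image can be installed from the CLI with archive download-sw. A rough sketch; the TFTP server address and image filename below are placeholders, not the actual files you would pick:

```
Switch# show version | include Version
Switch# archive download-sw /overwrite /reload tftp://192.0.2.10/c3560-ipbase-tar.122-50.SE1.tar
```

The /overwrite option replaces the existing image (useful when flash is tight) and /reload reboots the switch automatically once the new image is verified, so schedule it for a maintenance window.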
I'm using 12.2(52) and 12.2(53) for PoE and non-PoE switches. It's ok until you enable dot1x.
Word of caution: don't be alarmed if your switch takes twice as long to boot. This is a one-off: starting from 12.2(50), the new image upgrades the switch's bootstrap, so the switch reboots twice before settling in.
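You can confirm afterwards that the bootstrap was updated; the boot loader version appears in show version. Something like this (the version string shown is illustrative, not the exact one you'll see):

```
Switch# show version | include BOOTLDR
BOOTLDR: C3560 Boot Loader (C3560-HBOOT-M) Version 12.2(44)SE5
```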
I am a little concerned because we do use dot1x on the ports that have our APs. We have some switches running 12.2(52)SE that are using dot1q on trunk ports (APs and SFPs), and on one of those switches we have had some weird issues in the past. We were also having an issue using the APs with PoE and with power injectors. Suggestions?
Don't bother. I'm using both, and dot1x is causing the switch to crash. I'd wait until the next version is released, since the bug is supposed to be fixed/addressed there.
I have the same switch running 12.2(46)SE and experience the same issues as the OP. In my environment, a power brownout or blip in building power will prevent PoE devices from obtaining full power, and the devices will not register properly or fully boot until the switch has been manually cold reset, either by removing power or via a CLI command.