Last reboot reason: Operator changed 11g mode

remysyaku
Level 1

We have some APs that keep disassociating. Altogether we have 21 APs in the building, but only these 7 APs keep disassociating every hour.

Below is the log I grabbed from our 5508 controller.

Last reset reason: operator changed 11g mode

Our controller code is 7.0.116.0.

21 Replies

Scott Fella
Hall of Fame

The only time I see that is when I'm making changes to the radio, and I don't think you are doing that. When you say disassociating, you see this on the AP itself, correct? On the WLC, when you view your APs (Wireless tab, then click on an AP), do you see that the AP has a high uptime but the join time is low? Do you have another WLC that the APs are joining?
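For anyone following along, the uptime/join-time comparison Scott describes can also be pulled for all APs at once from the WLC CLI (7.0-era command names from memory, so verify on your code version):

```
(Cisco Controller) > show ap uptime
(Cisco Controller) > show ap join stats summary all
```

A high AP uptime paired with a short association (join) time means the AP is not rebooting, only re-joining the controller.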

Thanks,

Scott Fella

Sent from my iPhone

-Scott
*** Please rate helpful posts ***

I didn't make any changes to the radio when the log reported those issues. The strange thing is that the uptime is high but the join time is low; it keeps disjoining every hour. What might cause this problem? Would upgrading the code eliminate it? Currently we only have one WLC.

Please advise.

Thanks!

Is this a new install, or did it just start happening?

Well, you are on 7.0.116.0, which isn't a bad code. If you wanted to upgrade, you should use 7.0.220.0, which I have been using lately. Are you sure you only have this problem with those specific 7 APs? If downtime isn't an issue, then upgrading to 7.0.220.0 might fix it.

I would also monitor the ports on the switch. If you see errors, it might be due to cabling. You can always swap a good AP into the problematic AP's cable drop to see if the good AP has issues. If so, then you know it's a cable or patch cable issue.
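Monitoring those ports might look like this on the 3560 (the interface name is a placeholder; the WS-C3560V2-24PS mentioned later in the thread has FastEthernet access ports):

```
Switch# show interfaces FastEthernet0/1 counters errors
Switch# show interfaces FastEthernet0/1 | include errors|CRC|collisions
```

It helps to `clear counters` first, then re-check after the next disjoin, so you can tell whether the error counters are actually growing during the event.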

-Scott

This just started happening within the last few days.

For now we cannot upgrade, because our other APs are already in production. Yes, only those specific APs have this issue.

OK. I will try swapping a good AP with a problem AP and see the result.

Thanks. I will be back once I've tested all this.

If there is another solution, please post it.

Right now you want to eliminate the cabling, since you know which APs are faulty. Swapping the APs will eliminate either the cabling or the AP. Since you only have one WLC and only 7 APs are having issues all the time, you can pretty much eliminate the WLC. Are these 7 on the same switch?

-Scott

Scott Fella

You can also run a TDR test from a supported switch. Leo has some good threads on this forum; just do a search on "TDR" and look for Leo :). Here is the command from one of his posts:

test cable-diagnostics tdr interface <interface>
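On a switch that does support TDR, the usual sequence is to kick off the test and then read the result a few seconds later (the interface name is a placeholder; syntax as I recall it, so check your platform's documentation):

```
Switch# test cable-diagnostics tdr interface GigabitEthernet0/1
Switch# show cable-diagnostics tdr interface GigabitEthernet0/1
```

The show output reports per-pair status (Normal, Open, Short) and an estimated distance to the fault.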

-Scott

It seems our switch (Cisco WS-C3560V2-24PS) doesn't support TDR.

Yes, our problem APs are all under the same switch.

Can the uplink port cause this failure? Sometimes when I ping the switch management IP, it only responds after 2 unsuccessful pings.

That is the problem, I bet... If you lose connectivity to the switch, the APs on that switch will lose connectivity to the WLC.

-Scott

The TDR documentation can be found here.

Unfortunately, the 3560 you have is a FastEthernet variety, and this model does not support TDR.

Can you please upgrade your WLC firmware to 7.0.220?

Justin Kurynny
Level 4

Remysyaku,

I ran into this issue in the lab today, did some searching, and found your post. I have a pretty bare-bones setup, so after a little searching through my own logs and inspecting configurations, I think I may have spotted the cause (at least generally).

Frequent disassociations by lightweight APs will definitely occur when you have periods of congestion in your network (and without QoS mechanisms for network control traffic). During the congestion, CAPWAP control gets starved out and the APs can't maintain heartbeat to the controller, so they disassociate and try their controller list again. They don't reboot, they just start hunting. This is why you will see long uptimes and short association times.
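One way to protect that control channel is to mark and prioritize CAPWAP control traffic (UDP 5246) at the access switch. A rough IOS sketch; the ACL, class, and policy names here are made up, and on a 3560-class switch you would also need `mls qos` enabled globally for the marking to take effect:

```
! Match CAPWAP control (UDP 5246) in either direction
ip access-list extended CAPWAP-CTRL
 permit udp any any eq 5246
 permit udp any eq 5246 any
!
class-map match-all CM-CAPWAP-CTRL
 match access-group name CAPWAP-CTRL
!
! Mark control traffic CS6 so it is less likely to be starved out
policy-map PM-AP-EDGE
 class CM-CAPWAP-CTRL
  set dscp cs6
!
interface FastEthernet0/1
 service-policy input PM-AP-EDGE
```

This only helps if the switches along the path actually honor DSCP in their queuing, but it addresses exactly the heartbeat-starvation scenario described above.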

My issue? I had a bridge loop in my lab. I'm messing around with autonomous bridging and have both my root and non-root bridge wired into the same switching infrastructure. Despite being careful about which VLANs are defined on the bridge switchports, I'm still seeing frames leak across the bridges, and the network switch is spending a lot of time processing the looping packets--which are apparently enough to disrupt normal traffic flows, but not enough to overrun the switch CPUs entirely.

Here's what I'm seeing in my switch logs:


Here's what I see in my controller SNMP trap logs:

Notice that I'm seeing the same "Operator changed 11g mode" that you saw. I haven't changed anything on the controller all day, so I'm not sure what's triggering that error--whether it somehow accurately reflects what the AP thinks is happening, or it's being raised erroneously. Either way, I think looking for administrative changes to the WLC radio settings as the cause is probably a dead end for troubleshooting.

Check your switch logs and look for network congestion between your AP and controller. I think you'll find that the AP association is getting torn down due to control-channel traffic getting starved out.
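A few standard IOS commands that surface that kind of congestion (interface name is a placeholder):

```
Switch# show interfaces FastEthernet0/1 | include output drops
Switch# show processes cpu sorted
Switch# show spanning-tree detail | include occurred|from
```

Growing output drops on the AP or uplink ports, a busy CPU, or a rapidly climbing spanning-tree topology-change counter (as in my bridge-loop case) would all point in this direction.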

Justin

Justin:

Thank you for your info.

I am having the same issue with some of my outdoor APs. I am using 7.0.116.0.

The problem with the 1520 APs that I have is that they should always act as a RAP, because they are connected to the wired side. But sometimes something happens (possibly with the switch, just as you described) and the AP falls back to MAP, joining the WLC over the virtual radio interface through another 1520 AP. When this happens, the AP stops considering the wired side for 15 minutes, and during those 15 minutes the wired side is not used.
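For reference, a 1520's mesh role can be pinned from the WLC CLI so it is explicitly a RAP, and the current roles can be verified afterwards (AP name is a placeholder; 7.0-era syntax from memory, so double-check on your controller):

```
(Cisco Controller) > config ap role rootAP AP1520-outdoor-01
(Cisco Controller) > show mesh ap summary
```

This doesn't fix the underlying wired-side flap that triggers the fallback, but it confirms the configured role versus the role the AP is actually operating in.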

If more users are connected to the AP, they will notice problems during that window.

I was able to see the message mentioned here in the AP logs, but I could not find any valuable explanation for it.

I think Cisco should reconsider this message and maybe change it to something better than mentioning a change in 11g radio mode!

Thanks a lot again.

Amjad

Rating useful replies is more useful than saying "Thank you"

Saravanan Lakshmanan
Cisco Employee

AP model and mode?

AP#show log

AP#more event.log

CAPWAP errors and events from the AP and WLC?

Msg and trap logs from the WLC?

Are the b/g radios enabled on the WLC?

If enabled, which data rates are on?

Typical situations where an AP reboots with "Operator changed 11g mode":

AP switching between connected and standalone when b/g is disabled - a bug

AP jumping between WLCs where g mode is enabled on one WLC but not the other - a misconfiguration
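The b/g network state and data rates being asked about here can be checked on the controller with (7.0-era CLI from memory, so verify on your code):

```
(Cisco Controller) > show 802.11b
(Cisco Controller) > show advanced 802.11b summary
```

The first shows whether the 802.11b/g network is enabled and which data rates are mandatory, supported, or disabled; comparing this across controllers would expose the mismatch scenario described above.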

Hello Saravanan,

The information you are asking for is the same as what TAC usually asks for, and when you send them the output, they either do not find anything and ask for more information, or they send you an unjustifiable explanation.

I think TAC does not know exactly what to look for when they ask for those outputs; they ask for them "hoping" to find something, but they don't know what it could be.

Of the explanations you mentioned for the message, the first fits only H-REAP APs, because locally connected APs can't switch between connected and standalone, right? And no one mentioned H-REAP when this message appeared.

The other explanation is not valid either, because all my switches have the same config for the g band, and the AP (judging from the logs) obviously did not move to any other WLC.

Rem, who opened this discussion, has only one WLC.

So I think the message is misleading when it points the finger at 11g mode. 11g is innocent, as per the jury who had this message appearing on their APs.

Thanks.

Amjad


FRALEY
Level 1

PROBLEM: We also had multiple APs rejoining the controller throughout the day. The rate of rejoining would increase when the APs were busiest. We discovered the problem had to do with feeding an AP from a PoE-capable Cisco switch port while using a 'brick' (power adapter) to power the AP.

SOLUTION: Turn off PoE on the switch port ('power inline never') if the AP is powered with a 'brick'.
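The fix is a one-liner per affected port (Gi0/47 here matches the interface in the log below):

```
Switch# configure terminal
Switch(config)# interface GigabitEthernet0/47
Switch(config-if)# power inline never
Switch(config-if)# end
```

This stops the switch from repeatedly detecting and then losing the powered device, which is what was bouncing the line protocol.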

Here's the log output from a Cisco switch showing what was taking place:

287504: Sep 20 09:01:44.541: %ILPOWER-7-DETECT: Interface Gi0/47: Power Device detected: IEEE PD

287505: Sep 20 09:01:45.565: %ILPOWER-5-IEEE_DISCONNECT: Interface Gi0/47: PD removed

287506: Sep 20 09:01:46.605: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet0/47, changed state to down

287507: Sep 20 09:01:48.618: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet0/47, changed state to up

287508: Sep 20 09:01:59.993: %ILPOWER-7-DETECT: Interface Gi0/47: Power Device detected: IEEE PD

287509: Sep 20 09:02:01.016: %ILPOWER-5-IEEE_DISCONNECT: Interface Gi0/47: PD removed

287510: Sep 20 09:02:02.048: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet0/47, changed state to down

287511: Sep 20 09:02:04.061: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet0/47, changed state to up
