Cisco Support Community

dot1x failure after IOS upgrade to 12.2(55)SE1


This weekend we upgraded the IOS on quite a few switches at a larger site. The site is a mix of 2960 and 3560 switches; the previous IOS version was 12.2(44) on most switches, but some had an older 12.2(25).

On Monday, when we came into work, we got a call that most of the ports on these switches were showing an amber LED and most people couldn't use the network.

After some investigation we discovered we had a problem with dot1x, so as a quick fix we removed it from the switches and restarted all the ports with dot1x disabled. This solved the problem, but we can't figure out what exactly caused it in the first place.

Our config looked like this:

aaa new-model
!
aaa group server radius etsdot1x
 server auth-port xxxx acct-port xxxx
 server yyy.yyy.yyy.yyy auth-port yyyy acct-port yyyy
!
aaa authentication login default group tacacs+ local
aaa authentication dot1x default group etsdot1x
aaa authorization exec default group tacacs+ local
aaa authorization network default group etsdot1x
aaa accounting dot1x default start-stop group etsdot1x

and on the ports themselves:

interface FastEthernet0/1
 switchport access vlan 20
 switchport mode access
 srr-queue bandwidth share 1 70 25 5
 srr-queue bandwidth shape 30 0 0 0
 priority-queue out
 authentication port-control auto
 dot1x pae authenticator
 spanning-tree portfast
 spanning-tree bpduguard enable
 service-policy input PC-PORT-QOS-IN

If anyone could pitch any ideas as to why this might have happened, I'd appreciate it.

Cisco Employee

dot1x failure after IOS upgrade to 12.2(55)SE1

You would need to give more details. Were there authentication attempts on the RADIUS server/ACS, or were the switches not sending anything at all?

The dot1x subsystem changed between those two releases. Commands that used to start with "dot1x ..." now start with "authentication ...".
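To illustrate, the legacy per-port dot1x commands map to the newer auth-manager syntax roughly as follows (a sketch; the exact keywords available depend on the feature set of your 12.2(55)SE image):

 dot1x port-control auto          ->  authentication port-control auto
 dot1x host-mode multi-host       ->  authentication host-mode multi-host
 dot1x reauthentication           ->  authentication periodic
 dot1x timeout reauth-period 3600 ->  authentication timer reauthenticate 3600

Note that "dot1x pae authenticator" is unchanged and is still required on the port in the new syntax.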

Your port config is not bad per se, but it all depends on what you want to do. You're doing dot1x only, right? No MAB?

dot1x failure after IOS upgrade to 12.2(55)SE1

Unfortunately I haven't had the chance to look at the logs on the servers yet; they are administered by a different department and I haven't been able to get in touch with them. I suspect the problem may have been reachability of the servers at the time, but I wanted to run the config by others in the meantime to make sure I didn't miss something else.

Also, no, we don't use MAB; that's the only dot1x-related config we have.

So, assuming the servers were reachable, is there any other factor that could prompt this reaction from the switches after an IOS upgrade?

New Member

Re: dot1x failure after IOS upgrade to 12.2(55)SE1

Do you have any debugs? E.g. debug radius authentication.
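For example, a few standard IOS debugs would show whether the switch is even attempting authentication (output verbosity varies by release; make sure terminal monitor or console logging is on before enabling them):

 debug radius authentication
 debug dot1x events
 debug authentication all

Remember to turn them off afterwards with "undebug all", as dot1x debugs can be chatty on a busy access switch.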

Also check that 802.1X is globally enabled, i.e. dot1x system-auth-control in global configuration mode.


New Member

Re: dot1x failure after IOS upgrade to 12.2(55)SE1

As Justin mentioned, make sure you have enabled "dot1x system-auth-control" in global config.
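A minimal sketch of enabling it and then verifying (the Sysauthcontrol line in the show dot1x output should read "Enabled"):

 configure terminal
  dot1x system-auth-control
  end
 show dot1x

If this global command is missing, ports configured for dot1x will not run authentication at all, regardless of the per-interface configuration.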

Apart from this, you stated that "most of the ports on these switches were showing an amber LED and most people couldn't use the network". This means that STP blocked these ports, which has nothing to do with dot1x.

So make sure there was no STP topology change during the IOS upgrade.

Cisco Employee

Re: dot1x failure after IOS upgrade to 12.2(55)SE1

Not necessarily. If the station fails dot1x authentication, the port LED will also show amber, so it could be anything. An amber light simply means the port is blocked, for whatever reason.
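On 12.2(55)SE, the auth manager can tell you exactly why a given port is unauthorized. A sketch of useful checks (the interface name here is just an example):

 show authentication sessions interface FastEthernet0/1
 show dot1x interface FastEthernet0/1 details
 show interfaces status err-disabled

The first two show the authentication state and method for the port; the last rules out err-disable causes such as BPDU Guard.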

Re: dot1x failure after IOS upgrade to 12.2(55)SE1

Well, I say it was dot1x because everything only started working after we removed the dot1x configuration from these ports and did a shutdown / no shutdown on them.

Also, only access ports were affected, and they all have BPDU Guard on them, while the ports that were transmitting BPDUs were working fine, so I don't think spanning tree was involved.