VSS upgrade option

gsidhu
Level 3

Hello

 

I have a situation where I will not be able to carry out an eFSU/ISSU upgrade on a VSS because of the IOS versions involved. There is no upgrade path via interim version(s) to get to the required version, so an eFSU/ISSU upgrade is not an option, and this has been confirmed by Cisco.

 

Is there another way that we can carry out the upgrade without causing any downtime?

 

One suggestion is to isolate the VSS (breaking the VSL and Dual-Active Detection links, shutting down all of the ports on one switch before upgrading it, then repeating the process on the other switch).

Once the isolated switches have been upgraded, reconnect VSL and Dual-Active Detection links to recreate VSS.
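For clarity, the sort of checks I have in mind before and after the split would be along these lines (a minimal sketch only, using standard Catalyst 6500 VSS show commands; the output will obviously depend on the chassis):

    show switch virtual
    show switch virtual link
    show switch virtual dual-active summary
    show redundancy states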

 

Two questions for this method:

 

  1. Has anybody done this before, and was there any downtime?
  2. If you haven't, what do you think will cause downtime?

 

 

8 Replies

Leo Laohoo
Hall of Fame

Is there another way that we can carry out the upgrade without causing any downtime?

Not a chance.  

Think about it: the supervisor card needs to reboot.  A reboot means downtime.  

Has anybody done this before and was there any downtime?

There is always downtime.  How long the downtime is will depend on how "loaded" the chassis is, the configuration, and the IOS version.  Expect between 7 and 11 minutes of downtime. 

I agree; there is no getting away from the fact that there will be downtime....

 

  1. Shut down the interfaces on secondary switch 2 that connect to servers/switches (see the CLI sketch after this list). Assuming that all servers are dual-attached, we don’t expect any loss of service (‘expect’ being the operative word, as it will depend on server OS, NIC teaming configuration, etc.).
  2. Shut down the VSL and Dual-Active Detection links on switch 2.
  3. Upgrade switch 2. The reboot will have no effect on production traffic.
  4. Now for the next part, which is to shut down all of the interfaces on switch 1 connected to servers/switches and then re-enable all of the interfaces on switch 2 that were shut down in step 1. This will definitely result in downtime.
  5. Upgrade switch 1.
  6. Re-enable the VSL and Dual-Active Detection links on switch 2.
  7. Re-enable all of the interfaces on switch 1 that were shut down in step 4.
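To make steps 1 and 2 concrete, a rough sketch of the kind of commands involved on switch 2 (the interface range and the VSL port-channel number below are placeholders for this example, not taken from any actual configuration):

    configure terminal
     interface range TenGigabitEthernet 2/1/1 - 8
      shutdown
     exit
     ! the VSL port-channel number (Po2 here) is an assumption for this sketch
     interface Port-channel 2
      shutdown
     end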

 

I think there is a possibility of further downtime at steps 6 and 7 as the Supervisors synchronise MAC addresses, form Port Channels, etc. In theory it shouldn’t happen, but there may also be dependencies on server OS, NIC teaming configuration, etc.

 

As I mentioned, this is a suggestion which I thought would be worth exploring and comparing with a traditional non-eFSU upgrade. In both cases there will be downtime. I cannot think of any other method that will result in no downtime, or less downtime than the traditional method.

Thanks for your response.

 

 

 

Uhhhh ... Your steps are extremely disruptive and can take double the time. 

Go HERE.  I have presented two different options to use aside from ISSU or FSU/eFSU.

Yes, I meant to say that the traditional 'RPR mode' upgrade is what I put forward to the customer, which, as you know, forces downtime as all of the line modules get reset.

What I described above is a suggestion from a colleague which could potentially reduce the downtime/disruption compared to a traditional RPR-mode upgrade. (The situation is that the customer has services/applications that need to be available 24/7.) However, it seems more complicated and riskier.
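For comparison, the traditional RPR-mode upgrade I put to the customer amounts to roughly the following (a hedged sketch only: the image name is purely illustrative, the boot/file-system syntax depends on the supervisor, and 'redundancy reload peer' on a VSS reloads the standby chassis rather than just the standby supervisor):

    configure terminal
     no boot system
     boot system flash disk0:s72033-advipservicesk9_wan-mz.new.bin
     end
    copy running-config startup-config
    redundancy reload peer
    ! once the standby has come back up on the new image (RPR due to the mismatch):
    redundancy force-switchover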

BTW I clicked on the link and read your options, which I thought were worthy of 5 points.

I personally would not be inclined to reboot the whole switch as per your post.

Instead (and as an alternative option) I would:

1) Reset the standby supervisor (slot 2/5) so that it loads the new image ('hardware module 2/5 reset'). AFAIA this should not reset any of the other modules.

2) Issue 'redundancy force-switchover', which will reset the supervisor in slot 1/5 and the rest of the line modules. This is where I am hoping that the downtime will be less than 15 minutes.
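As a rough sketch of that sequence (the 'hw-module' syntax here is an assumption and varies between standalone and VSS chassis and between IOS trains, so check it on the platform first):

    hw-module module 5 reset
    ! on a VSS this may need a switch prefix, e.g. 'hw-module switch 2 module 5 reset'
    show redundancy states
    ! confirm what state the standby actually reaches before switching over
    redundancy force-switchover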

2) Issue 'redundancy force-switchover', which will reset the supervisor in slot 1/5 and the rest of the line modules.

I don't believe this will work.  Why?  Because with #1, the supervisor card has rebooted into a DIFFERENT version of IOS.  So this means this card will be unable to join as a redundant card; rather, it will boot into ROMmon.  This also means that recovering a card in ROMmon requires human intervention.  
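Before relying on a switchover, it is worth confirming what the standby has actually booted into. A minimal sketch of the checks (standard 6500 show commands, nothing platform-specific assumed):

    show bootvar
    show version
    show redundancy states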

Thanks for pointing this out.

The process that I described is what I would use for upgrading a single Catalyst 6500 with dual Supervisors. I wasn't able to find any 'clear' instructions for a non-eFSU/ISSU VSS upgrade, so I assumed it would be the same.

Just out of interest, when you say:

So this means this card will be unable to join as a redundant card; rather, it will boot into ROMmon

Are you speaking from experience?

I would appreciate comments from anybody else reading this thread who has run into this issue.

Are you speaking from experience?

Yes.  

This is a common behaviour.  Search the forum and you'll see this is a common topic.

Thank you very much for your help and quick responses.
