Welcome to the Cisco Networking Professionals Ask the Expert conversation. This is an opportunity to learn how to deploy a Catalyst 6500 Virtual Switching System 1440 with Cisco expert Balaji Sivasubramanian. Balaji is a senior product manager in Cisco's Campus Switching Systems Technology Group, where he is involved in defining product requirements and marketing for next-generation products and features on the Cisco Catalyst 6500 Series Switch. One of his key accomplishments is the successful launch of the Catalyst 6500 Virtual Switching System 1440 technology, which was well received in the industry. He is a coauthor of the best-selling Cisco Press book "Building Cisco Multilayer Switched Networks" and has authored and reviewed many articles on Cisco.com.
Remember to use the rating system to let Balaji know if you have received an adequate response.
Balaji might not be able to answer each question due to the volume expected during this event. Our moderators will post many of the unanswered questions in other discussion forums shortly after the event. This event lasts through June 27, 2008. Visit this forum often to view responses to your questions and the questions of other community members.
As far as I was able to find on the cisco.com website, VSS-1440 supports only 128 MECs, which is rather low for a server farm, for example. I am thinking of implementing this system, but I would need more than 128 EtherChannels. Can you please let me know when this number is going to be increased? Do you have an estimate of when the new firmware containing this increase will be out? Can you direct me to a link where I can find details about future firmware scaling plans?
Hello, this restriction will be removed in 12.2(33)SXI, which will be available in the next 2-3 months. In that release you can configure as many MECs as you need.
I am looking at replacing two Nortel 8800 switches with two Catalyst 6513s, with each chassis acting as a complete hardware backup for the other.
This is intended to be used as the core switch, with all the wiring-closet 3750s running to it, each with one SFP uplink to each 6513 in an EtherChannel.
Basically, EtherChannel running between the stacked 3750s and the "virtualised" 6513s,
i.e. from port Gig1/0/1 on a 3750 to port Gig1/0/1 on the first 6513, and from port Gig5/0/1 on a stacked 3750 (EtherChannel) to port Gig1/0/1 on the second 6513.
So, having read about the virtualisation that the Supervisor 720 can provide, this is what I thought: my idea should now work.
Any ideas on this? Is my thinking correct?
How many PSUs/controllers etc. would be needed per chassis for this idea to work?
3x 48-port Gigabit Ethernet (copper)
1x 48-port Gig fibre (SFP?)
in each chassis.
Could I also get away with a smaller chassis?
So far I have also found no documentation on how this is configured (it is not like HSRP, a standard protocol).
Thanks for all help.
You can go with two 6509-Es or 6506-Es, with the WS-X6748 in copper and SFP versions.
Configuration is similar to configuring a single chassis (after converting to VSS mode).
You can check the white paper and configuration guide for how to convert to VSS.
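As a rough sketch, the conversion and a multichassis EtherChannel (MEC) down to a 3750 stack might look like this — the domain, port-channel, and interface numbers here are hypothetical, so check the VSS configuration guide for the exact procedure:

```
! On chassis 1 (repeat on chassis 2 with "switch 2" and its own VSL port-channel)
switch virtual domain 100
 switch 1
!
interface Port-channel10
 switch virtual link 1
!
interface range TenGigabitEthernet5/4 - 5
 channel-group 10 mode on
!
! Converts the chassis to virtual switch mode (renumbers interfaces and reloads)
switch convert mode virtual
```

After conversion, interfaces use switch/slot/port numbering, so a MEC takes one member port from each chassis; the 3750 stack bundles its two uplinks (one per stack member) the same way:

```
! On the VSS pair
interface range GigabitEthernet1/1/1 , GigabitEthernet2/1/1
 channel-group 20 mode desirable
!
! On the 3750 stack
interface range GigabitEthernet1/0/1 , GigabitEthernet5/0/1
 channel-group 1 mode desirable
```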
Is VRF Lite supported on Virtual switch 6500 (VSS) ?
If not, is it on the roadmap?
A couple of questions as we contemplate the use of VSS.
We are considering direct server connectivity with either an EtherChannel or a teamed NIC connection (one link to each switch).
If operating in VSS mode, is there any driver for having dual redundant supervisors any more?
If some servers are not dual-attached, does a dual supervisor offer increased reliability on the switch they connect to?
If the answer is yes, then when will dual supervisors be allowed in VSS mode?
Pat, the answer depends on how long you can live without one of the NIC connections.
For dual-homed servers, dual supervisors may still be needed if you can't afford to have one of the server links down for even, say, less than a second.
If you are primarily using the second link as a backup today, then with VSS, where you can use both links say 99.999% of the time, you won't miss the requirement for dual supervisors.
Having said that, we are considering dual sup support for a future release.
Thanks for your fast response. I do however have one more question. Say I want to migrate to VSS in two phases and, for the first one to two months, use a single 6k9 chassis with two VSS-capable Sup 720s, but with VSS not enabled. Is it possible to run two redundant, VSS-capable supervisors in the same chassis with VSS not enabled on them, and then, once the second chassis arrives and is in place, move one supervisor over and actually enable VSS?
Yes, you can run the VS-S720-10G supervisor in standalone (regular) mode, and in that case all modules, including WAN/service modules and dual supervisors, are supported.
Does the VSS 1440 feature allow clustering of more than two physical chassis?
If not, is it on the roadmap?
To add to this and give a scenario that we would have...
In a redundant data centre scenario (1 km apart), say you have two server nodes in a cluster, one in each location, and the servers only allow for copper connectivity (100-metre limit).
I need each server to have a dual connection to a VSS switch complex at the access layer. This requires me to have four switches between the two data centres. Since VSS operates with two switches only, and I need the same VLAN on four switches, how would you propose they communicate with redundancy? Will I be relying on STP again? If the same switches do L3, then there is the added complexity of adding HSRP back in.
You can connect both VSS clusters together. You would be okay, as you can see below.
server1 --VSS1 --------VSS2 ---server2
Agg1 -L3- Agg2
If you are taking the VLAN to the agg layer, make sure you have L3 between the agg layers.
Let me know if this would work.
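For the L3 link between the agg layers, a minimal sketch would be a routed point-to-point link like the following (interface numbers and addressing are hypothetical):

```
! Agg1
interface TenGigabitEthernet1/1
 no switchport
 ip address 10.0.0.1 255.255.255.252
!
! Agg2
interface TenGigabitEthernet1/1
 no switchport
 ip address 10.0.0.2 255.255.255.252
```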
Today you can only cluster two switches. The architecture allows more than that; we are looking at how to do it, how redundancy would work in such a scenario, and so on.
Two more questions:
1) I currently have a WS-SUP720-3BXL in my chassis. Can this run VSS? Do I have to upgrade it with a WS-F6K-PFC3CXL, or should I replace it with a VS-S720-10G-3CXL to allow VSS?
2) Can I use the Application Control Engine module (ACE20-MOD-K9) in the two chassis with VSS enabled?
1) You need the VS-S720-10G-3C or -3CXL for VSS.
2) ACE will be supported in 12.2(33)SXI, in the 2-3 month timeframe.
Another question ...
A 65xx with VSS only supports 67xx line cards.
The 6513 chassis does not support 67xx line cards in slots 1-8 (20 Gig vs 2x20 Gig in slots 9-13).
Therefore VSS does not make sense in a 6513 chassis?
The 6513 does support the 6724-SFP and service modules (there may be exceptions) in the top slots.
Having said that, a 6509-E or 6509-V-E would be a better choice unless you have a lot of service modules or 6724-SFPs.
We have 2 x 6500 switches in VSS mode, connected via a 10G VSL, with servers that are connected via multichassis EtherChannel.
We need to add other servers with dual interfaces using other technologies for redundancy (clusters, Sun IPMP, etc.).
Does the existing 10G interface support extended VLANs in EtherChannel trunk mode for these other technologies, or do we need to reserve independent interfaces?
Thanks in advance for any information.
The VSL carries user traffic as well, so you don't need separate links just for cross-chassis traffic.
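For example, a MEC trunk carrying both standard and extended-range VLANs (1006-4094) for the dual-homed servers could look something like this — the VLAN and interface numbers are hypothetical:

```
! One member port on each chassis of the VSS pair
interface range GigabitEthernet1/3/1 , GigabitEthernet2/3/1
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 100,2000
 switchport mode trunk
 channel-group 30 mode active
```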
Two VSS clusters are interconnected by a MEC of 2x10Gig links across two different sites.
------ MEC ------
The MEC is passing on a DWDM lambda and the distance is 40 km.
The MEC is an 802.1Q L2 trunk that carries multiple VLANs. Let's focus on VLAN 400, where servers are interconnected with a default gateway of .1.
I implement GLBP or HSRP between the Router_VSS1 and the Router_VSS2.
How can I be sure that the traffic is routed locally? Is there a trick to do it? Is there a future release that can modify the HSRP behaviour?
In fact, in the worst case, a server located in site 1 on VLAN 400 has to go through the MEC link (40 km, to where the active HSRP is) to be routed, and then come back through the 40 km MEC, just to communicate with a server located in site 1 on VLAN 500, for example.
How can we avoid this situation ?
Configure an IP address in the same VLAN per VSS cluster and have the local servers point to that. You have redundancy right there if one of the switches goes away; what is the point of configuring HSRP on top of that? If the whole cluster goes away, well, your servers go away anyway.
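As a sketch of that suggestion, each VSS cluster simply gets its own SVI address in the VLAN (addresses hypothetical), and the servers in each site point at the local one:

```
! VSS1 (site 1) - local servers use 10.40.0.1 as their gateway
interface Vlan400
 ip address 10.40.0.1 255.255.255.0
!
! VSS2 (site 2) - local servers use 10.40.0.2 as their gateway
interface Vlan400
 ip address 10.40.0.2 255.255.255.0
```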
Am I missing something ?
I have two 6509-E switches running VSS. I am being asked if this supports the Cisco web browser user interface. If it does, is there anything different in the config for this to be enabled? And if it is not supported, will it be in the future?
Sorry, I mis-worded my question earlier... I did not mean the Cisco web browser user interface; I want to see if the CiscoView Device Manager is supported in the VSS configuration.
I'm preparing for a data center deployment of VSS and was wondering what the plans were for ISSU?
ISSU is planned for 12.2(33)SXI (2-3 months). You will get almost hitless upgrades, with a sub-200 ms hit.
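For reference, the ISSU procedure on IOS follows a four-step command sequence along these lines — the image name is hypothetical, and the exact arguments to `issu loadversion` vary by platform, so check the SXI release documentation for the precise eFSU procedure on VSS:

```
! Load the new image onto the standby supervisor and reload it
issu loadversion disk0:s72033-new-image.bin
! Force a switchover so the standby (running the new image) becomes active
issu runversion
! Accept the new version (stops the automatic rollback timer)
issu acceptversion
! Commit: upgrade the remaining supervisor to the new image
issu commitversion
```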
Very good news. It sounds like there will be real maturing in 12.2(33)SXI. I know that more of the service modules will be supported; can you speak to that a little bit? We're currently planning on moving all of our FWSM/CSM/ACE to a service layer. This plan really gained steam when we could not use them with VSS. We're pretty fond of the idea now, and future support in VSS probably won't change our minds. Still, I am interested in whether FWSM/CSM in a VSS pair will simply behave like a pair in a single chassis today, or whether there would be any additional benefits to running services under VSS?