Welcome to the Cisco Networking Professionals Ask the Expert conversation. This is an opportunity to get an update on the Cisco Catalyst 4500 series switch, a midrange, high-performance modular platform offering secure, flexible, non-stop communications, with Cisco expert John Bartlomiejczyk. John has been with Cisco for 12 years as a system engineer, technical marketing engineer, and product manager, most recently for the Cisco Catalyst 4500 Series. He holds CCIE certification #3987 in Routing and Switching. He has given presentations at Cisco Networkers and CCIE Summit meetings around the world. John has more than 20 years of experience in the internetworking industry.
Remember to use the rating system to let John know if you have received an adequate response.
John might not be able to answer each question due to the volume expected during this event. Our moderators will post many of the unanswered questions in other discussion forums shortly after the event. This event lasts through May 2, 2008. Visit this forum often to view responses to your questions and the questions of other community members.
The 4500 E-series chassis are required to support the 24 Gbps per-linecard-slot capacity provided by the Catalyst 4500 Supervisor 6-E when used with the Catalyst 4500 E-series linecards.
The 4500 E-series chassis use all the currently shipping 4500 power supplies and also support the classic 4500 6 Gbps per-slot linecards and Supervisors. When used with the Supervisor 6-E, you can mix and match 24 Gbps E-series linecards with the classic 6 Gbps linecards, each running at its respective capacity. For ease of migration, all classic Supervisors and classic linecards are, of course, supported.
EW code is based on the IOS "E" train. Several years ago, to provide more feature consistency across the Cisco switching family, all switching-platform IOS releases were rebased on the IOS "S" train. For the Catalyst 4500, the first S-train-based release was 12.2(25)SG.
Yes, IGMPv3 is supported in both trains and has been since 12.1(20)EW. In general, if you want to find out when a feature is supported or was initially supported, the IOS release notes will provide this:
Being a Catalyst 4500 PM, I can only speak for the Catalyst 4500. We are exploring support for IEEE 802.an in the future.
Will the 4500 ever support the Virtual Switch architecture? I would be interested in running the multi-chassis EtherChannel in my data center, between my top-of-rack switches and my two distribution 4506s.
It's being investigated, but there are no committed plans at this point. Please note the 4500 may be used as a VSS client to the 6500.
I observed that NetFlow services are not supported at all on the Sup 6-E, while they are supported on the Sup V and Sup IV.
Is there a specific reason behind removing this support?
Or are there any plans to integrate it into the Sup 6-E?
It's a function of the hardware on the Sup 6-E, which has a lot of great capabilities such as advanced QoS, IPv6 in hardware, and support for 24 Gbps per slot; however, it does not support NetFlow.
What we are recommending is to have NetFlow enabled on your distribution-layer switch when using the Sup 6-E in the access layer.
Thanks for the clarification.
However, for me or a customer, it doesn't make any sense to remove an existing feature that is really useful.
Hope to see this in the next switch.
Thanks a lot John.
I would like to set up redundancy between my two 4506 switches. So I made a design (see attachment) and would like your opinion on it.
My goal is to provide redundancy using HSRP. In this topology, how will HSRP communicate?
1) Should I link both core switches directly? Will it work that way?
2) Should I link switch 01 to core 01 and switch 04 to core 02, or link switch 01 to both core 01 and core 02?
In any case, what is the best way to do this?
If I understand your diagram correctly, it is not best practice to link access-layer switches directly to each other with a single uplink to a distribution switch; that creates lots of spanning-tree issues. In general, each access-layer switch should have dual uplinks, one to Core-1 and the other to Core-2. There should also be a link between Core-1 and Core-2. This direct link enables direct HSRP keepalives between Core-1 and Core-2 and also creates a triangle between the access-layer switch and the two core switches. This triangle will enable you to run quick-convergence features like BackboneFast.
If you have spanning tree turned off completely, the direct link between the core switches still serves as a direct path for HSRP hellos. If you are running Layer 3 links, you may want to consider GLBP for load balancing; this is supported on the 4500 Supervisors since the 12.2(40)SG IOS release. Finally, there are a few HSRP design references you may want to check out:
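As a sketch of the setup described above, HSRP between the two core switches might look like the following (the VLAN number, addresses, and priorities are illustrative, not from this thread):

```
! Core-1: active HSRP router for VLAN 10
interface Vlan10
 ip address 10.10.10.2 255.255.255.0
 standby 10 ip 10.10.10.1
 standby 10 priority 110
 standby 10 preempt

! Core-2: standby HSRP router for the same VLAN
interface Vlan10
 ip address 10.10.10.3 255.255.255.0
 standby 10 ip 10.10.10.1
 standby 10 priority 90
 standby 10 preempt
```

The hosts use the virtual address 10.10.10.1 as their default gateway; the hellos between the two routers travel over the direct Core-1 to Core-2 link.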
Please look at my new diagram. Is it all right now?
Should the link between the cores be a trunk? I have many VLANs on both.
I didn't receive your attachment, but I will assume it adheres to the best practices I pointed out in my earlier response. Also, please check the links I provided in that response.
This question is really more network-design related than Catalyst 4500 specific, so you may want to check all of the network design best practices available on cisco.com.
That being said, if you are using HSRP, the HSRP keepalives and hellos are sent across an L2 link, so yes, there should be a trunk between the two core switches. You may also want to consider using passive interfaces to reduce the number of routing peers, since with many VLANs there will otherwise be two routing peers per VLAN between the core switches.
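A minimal sketch of that trunk and the passive-interface approach (the interface names, VLAN list, and OSPF process are assumptions for illustration):

```
! Trunk between Core-1 and Core-2 carrying the user VLANs
interface GigabitEthernet1/1
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30

! Suppress routing adjacencies on the user-facing SVIs,
! leaving only the dedicated routed link active
router ospf 1
 passive-interface default
 no passive-interface Vlan999
```

With `passive-interface default`, the two cores form a single adjacency over the one non-passive interface instead of one per VLAN.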
We are planning to migrate from one core to a double core (we have the core and distribution layers combined), and in addition we have to move the server room. For now we will use just one gateway to the outside: a firewall connected to Core1. To make this migration as fast as possible, I came to the conclusion that I will configure HSRP for the VLAN interfaces, put Core1 in the old server room and Core2 in the new server room, and connect them together, since we have a connection between both rooms. With this solution I can move one server at a time without taking down the whole network. The tricky part here is the configuration of the link between the two core switches. I have read a lot of documentation. The best-practice advice is either to use a Layer 3 link between the two core switches or, in the case of a Layer 2 link, to increase the spanning-tree cost on the secondary core interface facing Core1.
I did some testing with a 4503 and a 3750, and for me the best solution is just to have a Layer 2 link between them without any changes to the spanning-tree settings. Could you please comment on this architecture? I have tested it and I'm pretty sure it is the best solution for us, but from what I see, all best-practice documents advise against it.
My past experience when I was an SE and TME was this: if you are running Layer 2 between the access and distribution layers, with dual uplinks from each access switch to each distribution switch, you should use an L2 link between the distribution switches to avoid a potential figure-8 spanning-tree loop and to create a triangle, so you can use fast-convergence features like BackboneFast or the industry-standard IEEE fast-convergence features. You do want to make the path to the primary HSRP switch consistent with the active L2 link; cost adjustment will let you do that. Otherwise there could be a circuitous path to the primary HSRP switch via the secondary HSRP switch.
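The cost adjustment described above can be sketched as follows (the interface name and cost value are illustrative):

```
! On the secondary HSRP switch, raise the spanning-tree cost of the
! uplink toward the inter-switch path so that traffic prefers the
! direct link to the primary HSRP switch
interface GigabitEthernet1/2
 spanning-tree cost 2000
```

The exact value matters less than it being clearly higher than the cost of the direct path to the primary switch.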
Thanks for the answer. Actually, I have tested the Layer 2 link between the distribution switches with the cost adjustment on the Core2 interface facing Core1
(spanning-tree cost 2000).
Then I disconnected some links from the access switch, kept pinging the outside gateway, and watched the result. With the cost adjustment, I got a couple of replies from the core before the outside gateway started to reply. Without the cost adjustment, I only lost one or two pings before the outside gateway replied.
Do you think there is some risk in implementing a Layer 2 link between two distribution switches without the cost adjustment on the interface in question?
Or is there something wrong with the way I have implemented the cost adjustment?
Pinging in a lab and having actual traffic going through a switch are two different things. :)
In general, you would want, say, your even VLANs using one HSRP switch as primary (with the other switch as backup) and the odd VLANs mapping to the other HSRP switch as primary (with the first as backup).
This will also aid in any troubleshooting, since everything is deterministic and you know what will happen in case of a network fault.
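A sketch of that odd/even split on one of the two switches (the VLAN numbers, addresses, and priorities are assumptions for illustration):

```
! Core-1: primary for the even VLAN, backup for the odd VLAN
interface Vlan10
 standby 10 ip 10.10.10.1
 standby 10 priority 110
 standby 10 preempt
!
interface Vlan11
 standby 11 ip 10.10.11.1
 standby 11 priority 90
 standby 11 preempt
!
! Core-2 mirrors this: priority 90 on Vlan10, priority 110 on Vlan11
```

This way each switch actively forwards for half the VLANs, and a failure of either one is handled predictably.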
I'm not sure my explanation was good enough.
I'm not questioning the whole HSRP primary/secondary switch setup. My question is about the link connecting the two core switches (or distribution switches, since we have just access and core layers).
Since HSRP state is sent along an L2 path, the most direct link would be a trunk between the core switches.
We had an interesting problem when attempting to implement DHCP Snooping on our 4500 switches.
We store the snooping database file in flash as opposed to a TFTP server, so that it is immediately available in the event of a switch reboot or other network event. On the 4510 switches, every time the flash file was updated, the switch would create a new copy of the file and mark the old file for deletion, but not actually delete it. This means that after a time the flash would become full of all these DHCP snooping files marked for deletion but not actually deleted. This causes the switch to reboot and then get stuck in a reboot cycle. We opened a TAC case and were told this was expected behaviour and that we shouldn't store the file in flash. The only way to clear these files was to manually 'squeeze' the file system.
This behaviour is different from a 3560, which actually deletes the old flash file every time it adds a new entry.
Are there any plans to modify the way a 4500 switch stores its DHCP snooping binding database so that local flash can be used? Because of this issue, only half our campus (3560s) can use snooping, while the other half (4510s) cannot. Needless to say, manually squeezing the filesystem on each 4510 every 'x' days/hours is not an option, especially since a failed/forgotten squeeze will cause the switch to reboot/fail.
It actually has nothing to do with the way the DHCP binding database is written; rather, it's the filesystem that the 4500 is using. The 3560 uses a different file system, more like FAT, so the deletion/overwrite behavior is different. Basically, if the "squeeze" command is not present in the privileged EXEC command set, then you can write over and over without consuming all the space in flash. It also means that you cannot undelete files. There is a file system that doesn't use the squeeze command and doesn't dynamically free space, but that's outside the scope here.
I recommend using a TFTP/FTP/SCP server to store your bindings table.
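For example, the binding database can be pointed at a TFTP server instead of local flash (the server address, filename, and write delay below are hypothetical):

```
! Store the DHCP snooping binding database off-box
ip dhcp snooping
ip dhcp snooping database tftp://10.1.1.100/4510-snoop.db
! Batch writes so the file is updated at most once per interval
ip dhcp snooping database write-delay 60
```

This keeps the bindings available across a reload without repeatedly rewriting local flash.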
Response from our DE team:
The issue is not applicable to the 4500 Sup 6-E, where files get deleted automatically, but that is not the case for classic 4500 Supervisors.
Deleted files stay in bootflash, requiring a squeeze. There are currently no plans to enable the cleanup of these deleted files.
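On classic Supervisors, the space held by files marked for deletion can be reclaimed manually from the CLI, for example:

```
! List all files in bootflash, including those marked deleted
dir /all bootflash:
! Permanently erase the deleted files and recover their space
squeeze bootflash:
```

Note that a squeeze can take a while and should be done during a maintenance window.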
The other aspect of this problem is a property of flash itself: we recommend against using flash for storing the binding database, no matter the platform. Flash inherently wears out, so after some number of erases of the flash sectors (the number varies from around 10K to 100K or even more), the sectors lose their ability to store data, and that's obviously not good!
Is there another SFP blade option for the 4500 instead of the 48-port one? I don't need that many ports, but I can't find anything else.
Perhaps the blade WS-X4506-GB-T would suit your needs. This blade provides six of those 'combo RJ45 / SFP' ports. We've used them extensively without any issues.
Datasheet for it (amongst all the other line cards) available at: