If you want to move to 10GE uplinks in your campus, the first phase is an assessment of your devices:
You need to find out what hardware and software changes are needed on:
access layer switches
This requires a detailed analysis because, for example, on modular switches the answer depends on the type of supervisor installed.
After you build a list of the hardware pieces that are needed (be sure to include optical transceivers; they are not bundled with line cards or switches), you can estimate the cost.
If management approves the cost, you can deploy the new switches or line cards.
Before installing new modules on modular switches like the C6500 or C4500, you may need to upgrade IOS to support them.
Then, during a maintenance window, configure the new 10GE ports and move one access switch at a time to the new links.
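As a rough sketch of that last step (the interface numbers and VLAN IDs here are hypothetical; adjust them to your hardware), the per-uplink cutover might look something like:

```
! Hypothetical example: bring up the new 10GE uplink as a trunk,
! then move the access switch over during the maintenance window.
interface TenGigabitEthernet1/1
 description Uplink to access-sw-01
 switchport
 switchport trunk encapsulation dot1q
 switchport mode trunk
 switchport trunk allowed vlan 10,20,30
 no shutdown
!
! Once traffic is verified on the new link, shut the old 1G uplink:
interface GigabitEthernet2/1
 shutdown
```

Verify the trunk comes up (show interfaces trunk) before shutting the old link.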
Hope this helps.
Could you please be more precise?
Do you need to upgrade the campus uplink ports? At which layer do you intend to make these changes? Do you require a hardware upgrade?
Please give us more detail about your current design and hardware, and what you intend to change.
First you might want to validate where 10 gig would be of benefit. Generally it's used where a single gig or multiple channeled gig between network devices doesn't provide enough bandwidth. It can also be used to provide 10 gig to a host.
Depending on your existing hardware, some chassis systems might simply allow addition of a 10 gig card. Others might require additional upgrades to the device before the card can be added. Still others might require wholesale replacement to support 10 gig.
You also need to consider whether reduced port density might be an issue, or any new issues of device and/or card oversubscription. (For example, a 6500 with a Sup720 provides 40 Gbps to the slot, but 10 gig cards can be had in 4-port, 8-port and 16-port versions. The latter two oversubscribe the bandwidth to the slot: 80 and 160 Gbps of port capacity against 40 Gbps to the slot, i.e. 2:1 and 4:1. [There's also PPS performance to consider.])
Also, when working with 10 gig, don't neglect the cost of the transceivers, and make sure they support the cable type and distance needed. (You also need to confirm the existing cable will work for 10 gig, especially at the distance involved.)
If you get into much device replacement, you might carefully consider, where possible, using higher-port-density devices to reduce device interconnections, better leverage the investment in 10 gig links, and possibly provide higher performance.
(For example, if you have a L2 access edge, devices host multiple VLANs, and traffic transits between the VLANs' subnets, moving to L3 at the edge should decrease the traffic that needs to transit the uplinks.)
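To illustrate that point, here's a minimal L3 access-edge sketch (the VLANs and addressing are hypothetical): the SVIs live on the access switch itself, so inter-VLAN traffic is routed locally and never crosses the uplink.

```
! Hypothetical L3 access-edge example: inter-VLAN routing happens
! on the access switch, not across the uplinks.
ip routing
!
interface Vlan10
 ip address 10.1.10.1 255.255.255.0
interface Vlan20
 ip address 10.1.20.1 255.255.255.0
!
! The uplink becomes a routed point-to-point link:
interface TenGigabitEthernet1/0/1
 no switchport
 ip address 10.0.0.2 255.255.255.252
```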
Thank you for the response.
As of now, on the active side we have Cisco 6509s in the core, Cisco 4506s in distribution, and Cisco 3500s at the access layer; on the passive side, the uplinks run over 1G MMF/SMF fiber cable.
Management wants to upgrade the uplinks between the core and distribution switches from 1G to 10G to increase bandwidth.
I want to know what hardware/software upgrades are required on the core and distribution switches. And on the passive side, will my existing MMF/SMF fiber cable work?
We don't have a budget problem and can replace entire chassis or fiber if required.
Thanks & Regards,
If your primary concern is 10 gig between distribution and core, if your 6509 has a sup720, you should be able to use one of the 10 gig line cards. For the 4506, your options are much more constrained, especially if it's not a -E chassis.
For just 4506 10 gig uplinks, you might be able to use the sup V-10GE (I'm assuming since it's a distribution device, you're routing on it). On a non -E 4506, the uplink or uplinks wouldn't be too oversubscribed since the line card slots only support 30 Gbps (5 slots @ 6 Gbps).
With regard to fiber, you'll need to match the right transceiver with the fiber and distance, for example, see table 1 in: http://www.cisco.com/en/US/prod/collateral/modules/ps5455/ps6574/product_data_sheet0900aecd801f92aa.html
Out of curiosity (and as Ram mentioned, money is not a problem), may I ask why not replace the 4500 with another 6500-E chassis with a Sup720 and run VSS between them?
Leo, to satisfy your curiosity: I had thought of suggesting 6500s for distribution, but for a couple of reasons I didn't. There wasn't, I think, a stated need for 10 gig to the access layer (where a 6500 would be a better choice for distribution), and often when money "isn't an issue", once you provide the estimated cost, it suddenly is.
However, I'm certainly not against replacing 4500s with 6500s. If the budget truly allows that much equipment replacement, further design analysis might be warranted. I've found with the latest equipment, you can easily go 2 tier where you used to need to go 3 tier. For such a 2 tier approach, a pair of 6500s running VSS and with high-density 10 gig ports can fan out to quite a few edge devices, some of which can offer high-density host gig.
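For reference, the VSS pairing mentioned above is built roughly like this (the domain number and interface/port-channel numbers are hypothetical, and this is a from-memory sketch; verify the exact commands against the 6500 VSS configuration guide for your software release):

```
! Hypothetical sketch: configure on each chassis before conversion.
switch virtual domain 100
 switch 1
!
! The virtual switch link (VSL) rides a dedicated port-channel:
interface Port-channel1
 switch virtual link 1
interface TenGigabitEthernet1/1
 channel-group 1 mode on
```

After both chassis are prepared, the conversion is done from exec mode with `switch convert mode virtual`, which reloads the box into the virtual switch pair.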
If money is indeed not an issue ...
I made a curious (hey, I'm still learning here!) post recommending the replacement of the 4500 with a 6500-E with a Sup720, running VSS.
For the access switch, how about using the WS-C3750E-xxPD-SF or -EF? That way your access switch can run 10Gb to the core. (So I await feedback from the experts.)
I don't think you require a Layer 2 switch for the servers. But if you happen to require one, the 2350 is a 48-port 10/100/1000BaseT switch with 2 x 10Gb ports.
Hope this helps too.