10-16-2007 11:24 AM - edited 03-05-2019 07:07 PM
Hello,
I have a 6509 (non-E chassis) and need three WS-X6748-GE-TX cards. As far as I can tell, I need the following:
2x Sup720-3BXL
2x WS-CAC-3000W
3x WS-X6748-GE-TX
5x WS-X6348-RJ-45 (for my other users)
And what about the fan? This switch currently has a WS-C6K-9SLOT-FAN2. Will this fan cool this configuration?
No PoE will be needed; all users are PCs.
Anything else I need to order? This switch will be under a fairly moderate/heavy load. Will this setup offer good GigE performance?
Is there anything else I need to worry about, e.g. switch fabric configuration? Stability and throughput are my highest priorities.
Thank you.
10-16-2007 11:59 AM
Hi
You will be fine with the FAN2; it is the fan required to run the Sup720 in the non-E chassis.
One thing: could you clarify whether you are talking about one chassis or two? The modules you list add up to 10, so you don't have enough slots in a single 6509.
As for performance, that is quite complicated because the 6500 is quite complicated :).
You don't say what the 6748 modules will be used for, i.e. servers or users.
The 6748 modules are fabric enabled, which means they have a dedicated connection to the switch fabric: 2 x 20Gbps per module, to be precise. You don't say whether you are looking to use DFC's (Distributed Forwarding Cards) on these modules?
The 6348-RJ-45 modules are not fabric enabled, which means that as soon as you run them in the same chassis as the 67xx modules, the 6500 has to drop from compact to truncated switching mode. Put simply, its throughput is reduced. Whether or not this would have an impact within your environment is difficult to say. We run servers on 6748's which also have non-fabric-enabled modules in the same chassis and they are fine. In some locations we would look to only utilise fabric-enabled cards in the chassis.
In terms of the performance of the 6348's: as they are not fabric enabled, they must use the shared bus, which is 32Gbps on the 6500. So all your 6348 modules have to share a single connection to the switch fabric. However, this is not as bad as it sounds, because your 6348's only support 10/100 connections, so you are unlikely to overload the shared bus.
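To make the bandwidth numbers above concrete, here is a quick back-of-the-envelope sketch. The port counts and link speeds come straight from the module names (WS-X6748 = 48 x GigE, WS-X6348 = 48 x 10/100); the 2 x 20Gbps fabric channels and 32Gbps shared bus are as described above. This is rough arithmetic, not a capacity-planning tool.

```python
# Worst-case oversubscription figures for the proposed chassis.

GBPS = 1.0

# Fabric-enabled WS-X6748-GE-TX: 48 GigE ports, 2 x 20 Gbps fabric channels.
ports_6748 = 48 * 1 * GBPS           # 48 Gbps of front-panel capacity per module
fabric_6748 = 2 * 20 * GBPS          # 40 Gbps to the switch fabric per module
oversub_6748 = ports_6748 / fabric_6748
print(f"6748 worst-case oversubscription: {oversub_6748:.2f}:1")   # 1.20:1

# Classic WS-X6348-RJ-45 modules all share the 32 Gbps bus.
modules_6348 = 5
ports_6348 = modules_6348 * 48 * 0.1 * GBPS   # every port flat out at 100 Mbps
shared_bus = 32 * GBPS
print(f"6348 aggregate demand: {ports_6348:.0f} Gbps vs {shared_bus:.0f} Gbps bus")
```

Even with all five 6348 modules driving every port at line rate, demand (24 Gbps) stays under the 32 Gbps bus, which is why the shared bus is unlikely to be your bottleneck here.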
Below is a link to the 6500 architecture papers, which may help explain some of the finer points.
http://www.cisco.com/en/US/products/hw/switches/ps708/prod_white_papers_list.html
Hope this makes sense
Jon
10-16-2007 12:13 PM
Ah sorry, that should have been two 6748's! Thanks for the correction!
This will be for one chassis.
The 6748's will be used for a mixture of servers and hosts, one of which is a monster RAID array. Hosts will remote boot from this RAID. Most of the hosts will use their GigE to boot from and not use it all that much otherwise.
Should I look into getting the DFC's? I've read about them a bit. My switch is a very simple setup: VLANs, no access lists, and no routing protocols (static routing only).
I appreciate the help, thank you!
10-16-2007 01:04 PM
Glad to help and appreciate the rating.
DFC's will be useful to you if you have a lot of traffic between devices that are on the same module. So if you have 30 servers on the same 6748 module that send a lot of traffic between each other then DFC's might be worth considering. But as soon as traffic needs to transfer between modules then you still need to use the switch fabric connection.
Jon
10-16-2007 01:09 PM
This is perfect, thank you!
10-16-2007 02:21 PM
One more thing: power. I would run a power calculator against this configuration.
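The shape of that power check can be sketched as below. Note the per-module wattages here are illustrative placeholders, not datasheet figures; only a proper power calculator (or the datasheets) gives real numbers. The supply rating comes from the WS-CAC-3000W named in the parts list.

```python
# Rough power-budget sketch for the proposed chassis.
# WARNING: the wattages below are ILLUSTRATIVE PLACEHOLDERS, not
# datasheet values -- run a real power calculator for actual figures.
draw_w = {
    "Sup720-3BXL x2": 2 * 350,        # placeholder per-supervisor draw
    "WS-X6748-GE-TX x3": 3 * 350,     # placeholder per-module draw
    "WS-X6348-RJ-45 x5": 5 * 120,     # placeholder per-module draw
    "FAN2 tray": 120,                 # placeholder fan-tray draw
}
total = sum(draw_w.values())

# With two WS-CAC-3000W supplies in 1+1 redundancy, the budget is
# what ONE supply can deliver, not the sum of both.
supply_w = 3000
print(f"estimated draw: {total} W; headroom on one supply: {supply_w - total} W")
```

The key point the sketch illustrates: with redundant supplies, size the load against a single supply so the chassis stays up if one fails.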