Nexus 7K placement

Unanswered Question
Jul 7th, 2008

Dear all,

I need some help regarding the placement of Nexus 7K and 5K switches in the DC.

What I need is:

1) Firstly, I wanted to know whether the Cisco DC 3.0 architecture is available for download. If yes, where is it available?

2) We have around 100 servers in the DC with high utilization. What would the business case be for deploying Nexus instead of 6500 switches?

And especially, if we have firewall/IPS appliances or line cards, what is the use of such high Tbps switching capacity if we have a bottleneck at the FW or IPS appliances?

3) If somebody could provide me with different deployment scenarios for Nexus, it would be a great help.


colin.mcnamara Tue, 07/08/2008 - 10:43

1. Cisco's Data Center Design Zone

2. The main business case for the 7K and 5K is high density of 10 Gig devices, the future need for 40 and 100 Gig uplinks, and the future need for consolidated I/O architectures.

The 7K can aggregate 32 10 Gig ports per line card in oversubscribed mode, or 8 10 Gig ports in dedicated mode. It is mainly designed for high-speed data center core/aggregation.
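As a rough sketch of what those per-line-card numbers imply, here is the oversubscription math, assuming the dedicated-mode port count (8 line-rate ports) reflects the card's usable fabric bandwidth; that assumption is an illustration, not a vendor spec quote:

```python
# Rough oversubscription math for a 32-port 10 Gig line card.
# Assumption (not vendor-verified): dedicated mode (8 line-rate ports)
# represents the card's actual fabric bandwidth.
PORT_SPEED_GBPS = 10

oversub_ports = 32    # ports usable in oversubscribed mode
dedicated_ports = 8   # ports usable in dedicated (line-rate) mode

offered = oversub_ports * PORT_SPEED_GBPS     # 320 Gbps offered load
capacity = dedicated_ports * PORT_SPEED_GBPS  # 80 Gbps assumed capacity

ratio = offered / capacity
print(f"Oversubscription ratio: {ratio:.0f}:1")  # 4:1
```

So in oversubscribed mode you are trading a 4:1 ratio for four times the port density, which is fine for bursty server access traffic but not for sustained line-rate flows.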

The 5K can run its 40 built-in 10 Gig ports at line rate, with the ability to add Fibre Channel or additional Gig expansion modules. It is focused on top-of-rack installation. The other cool thing it can do is serve as a translation point from classic Fibre Channel to Fibre Channel over Ethernet, enabling deployment of converged network adapters in your servers.


Tahir Ali Tue, 07/08/2008 - 11:38

Thanks Colin,

But why do we need such high throughput if we have a bottleneck at the firewall/IPS devices in the DC, and also at the WAN? Since my firewall and IPS will limit all the performance gain from the Nexus or 6500s, how can I design this to make it efficient?

colin.mcnamara Tue, 07/08/2008 - 13:25

Well, you kind of hit the nail on the head. When firewalling and filtering, you will always have some sort of choke point.

Though, normally that choke point is between your end users and the application front ends of your servers.

In a converged architecture, the majority of traffic will be intra-server, and between server and storage. In this case the high-bandwidth links become important.

At the end of the day, there is no magic solution to network design. It is mainly a game of trade-offs. I think the Nexus series data center switches are well positioned if you are moving towards heavy virtualization and consolidated I/O architectures. If you aren't moving towards that, the 6500 series switches should serve you well.


Tahir Ali Wed, 07/09/2008 - 02:56

Hehehe, thanks Colin for your help... just one last thing... can you provide me with a single data center diagram that uses Nexus integrated with 6500s (FWSM or ASA) and IPS?

A complete DC diagram would help me a lot... thanks again.

jschweng Thu, 07/10/2008 - 17:10

I'm wrestling with the same decision of using the Nexus series or 6500-E series switches.

This white paper from Cisco helped a great deal

A typical architecture involves dual 7000 cores feeding down to a pair of 6506 aggregation switches using Virtual Switching technology. Access switches then feed up to the VSS cluster. For the server farm, I would deploy a Nexus 5K switch and connect the SAN to it as well. This all depends on whether you need 10 Gig and a unified switch fabric supporting data, voice, and potentially video, all on a common infrastructure.

johnsmith10 Thu, 07/09/2009 - 20:15

This helped me also. I'm still contemplating building out some Nexus 5Ks with 2148s for our new data center build-out. Without going VSS on the aggregation 6509s, and with only a single 10 GE uplink to the 5Ks, I'm worried about the oversubscription on the 2K top-of-rack Nexus switches.
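For what it's worth, the worst-case math for a 2148 behind a single 10 GE uplink looks like this, assuming 48 x 1 GE server-facing ports (the usual top-of-rack configuration; treat the port counts as an assumption):

```python
# Worst-case oversubscription for a 48-port 1 GE fabric extender
# homed to a single 10 GE uplink. Port counts are assumptions based
# on a typical top-of-rack deployment, not a quoted spec.
server_ports = 48
server_port_gbps = 1
uplink_gbps = 10

ratio = (server_ports * server_port_gbps) / uplink_gbps
print(f"Oversubscription with 1 uplink: {ratio:.1f}:1")  # 4.8:1

# Adding uplinks brings the ratio down proportionally:
for uplinks in (1, 2, 4):
    r = (server_ports * server_port_gbps) / (uplinks * uplink_gbps)
    print(f"{uplinks} uplink(s): {r:.1f}:1")
```

So a single uplink gives roughly 4.8:1 in the worst case, which is why multiple uplinks per fabric extender are usually recommended when the servers actually push sustained traffic.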

