2. The main business case for the 7k and 5k is high density of 10 Gig devices, the future need for 40 and 100 Gig uplinks, and the future need for consolidated I/O architectures.
The 7k can aggregate 32 10 Gig ports per line card in oversubscribed mode, or eight 10 Gig ports in dedicated mode. It is mainly designed for high-speed datacenter core/aggregation.
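As a rough sketch, assuming the 32-port module described here is the N7K-M132XP-12 (where ports share bandwidth in groups of four; interface numbers are examples, and you should check your module's actual port-group layout), dedicating a port on NX-OS looks like this:

    interface ethernet 1/2 - 4
      shutdown                  ! the other members of the 4-port group must
                                ! be shut before the first port is dedicated
    interface ethernet 1/1
      rate-mode dedicated       ! this port now gets the group's full 10 Gig
      no shutdown

Shared (oversubscribed) mode is the default, so you only touch rate-mode when a port needs guaranteed line rate.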
The 5k can run its 40 built-in 10 Gig ports at line rate, with the ability to add Fibre Channel or additional Gig expansion modules. It is focused on top-of-rack installation. The other cool thing it can do is serve as a translation point from classic Fibre Channel to Fibre Channel over Ethernet, enabling deployment of converged network adapters in your servers.
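For the FCoE piece, a minimal Nexus 5K sketch (VSAN/VLAN/interface numbers are examples) that maps a VSAN onto an Ethernet VLAN and binds a virtual Fibre Channel interface to a server-facing port:

    feature fcoe                        ! enable FC/FCoE on the 5K
    vlan 100
      fcoe vsan 100                     ! carry VSAN 100 over Ethernet VLAN 100
    interface vfc 1
      bind interface ethernet 1/1       ! virtual FC interface for the server CNA
      no shutdown
    vsan database
      vsan 100                          ! the classic FC fabric being bridged
      vsan 100 interface vfc 1          ! place the vfc in that VSAN
    interface ethernet 1/1
      switchport mode trunk             ! the FCoE VLAN must be trunked
      switchport trunk allowed vlan 1,100
      no shutdown

With that in place, a converged network adapter in the server sees a normal FC login on one side and ordinary Ethernet on the other.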
But why do we need such high throughput if we have a bottleneck at the firewall/IPS devices in the DC, and also at the WAN? Since my firewall and IPS will limit all this performance gain from the Nexus or 6500s, how can I design to make it efficient?
Well, you kind of hit the nail on the head. When firewalling and filtering, you will always have some sort of choke point.
Normally, though, that choke point sits between your end users and the application front ends of your servers.
In a converged architecture, the majority of traffic is intra-server and between server and storage. That is where the high-bandwidth links become important.
At the end of the day, there is no magic solution to network design. It is mainly a game of trade-offs. I think the Nexus series datacenter switches are well positioned if you are moving towards heavy virtualization and consolidated I/O architectures. If you aren't moving towards that, the 6500 series switches should serve you well.
A typical architecture involves dual 7000 cores feeding down to a pair of 6506 aggregation switches running Virtual Switching System (VSS) technology. Access switches then feed up to the VSS cluster. For the server farm, I would employ a Nexus 5K switch and connect the SAN to it as well. This all depends on whether you need 10 Gig and a unified switch fabric supporting data, voice, and potentially video on a common infrastructure.
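For reference, a minimal sketch of the VSS side on the 6500 pair (domain, priority, and port numbers are examples; the second chassis mirrors this with "switch 2" and its own VSL port channel):

    switch virtual domain 100
      switch 1
      switch 1 priority 110
    interface port-channel 10
      switch virtual link 1             ! this port channel becomes the VSL
      no shutdown
    interface range tenGigabitEthernet 5/4 - 5
      channel-group 10 mode on          ! physical VSL member links
      no shutdown

After both chassis are configured, "switch convert mode virtual" on each merges them into the single logical aggregation switch the access layer uplinks to.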
This helped me also. I'm still contemplating building out some Nexus 5Ks with 2148s for our new data center build-out. Without going VSS on the aggregation 6509s, and having only a single 10 GE uplink to the 5Ks, I worried about the oversubscription on the 2K top-of-rack Nexus switches.
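For a rough sense of the numbers: a 2148 exposes 48 x 1 GE host ports behind 4 x 10 GE fabric uplinks, so a single uplink is roughly 4.8:1 oversubscribed (48 Gig of host capacity into 10 Gig), while bundling all four brings it down to 1.2:1. Assuming a 5K running a release that supports fabric port channels on the 2148 (FEX id and interface numbers are examples), the bundling sketch looks like:

    feature fex
    fex 100
      description rack1-2148
    interface port-channel 100
      switchport mode fex-fabric        ! this port channel carries FEX traffic
      fex associate 100
    interface ethernet 1/1 - 4          ! the four uplinks to the 2148
      switchport mode fex-fabric
      fex associate 100
      channel-group 100

The alternative is static pinning ("pinning max-links"), which keeps per-port determinism but takes down a pinned group of hosts if its uplink fails.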
Prerequisites for integrating an ASAv with ACI:
- VMware Trunk Port Group is supported from ACI version 2.1.
- VMM integration must be configured properly.
- The ASA device package must be uploaded to APIC.
- The ASAv version must be compatible with the ACI and device package versions.