ASK THE EXPERTS - NEXUS 7000 IN VIRTUALIZED DATA CENTER

ciscomoderator
Community Manager
Community Manager

Welcome to the Cisco Networking Professionals Ask the Expert conversation. This is an opportunity to learn how the Nexus 7000 can help scale and simplify your virtualized server deployments within the data center using technologies such as OTV and FabricPath with Balaji Sivasubramanian. Balaji is a product line manager in the Data Center Switching business unit of Cisco, focusing on marketing the Cisco Nexus 7000 Series data center switches. He has also been a senior product manager for the Cisco Catalyst 6500 Series Switch, for which he successfully launched the Virtual Switching System (VSS) technology worldwide. He started his Cisco career in the Technical Assistance Center working on LAN switching products and technologies. He has been a speaker at industry events such as Cisco Live and VMworld. Sivasubramanian holds a bachelor of engineering degree in electrical and electronics from the College of Engineering, Guindy, Anna University (India) and a master of science degree in computer engineering from the University of Arizona.

Remember to use the rating system to let Balaji know if you have received an adequate response.

Balaji might not be able to answer each question due to the volume expected during this event. Our moderators will post many of the unanswered questions in other discussion forums shortly after the event. This event lasts through December 10, 2010. Visit this forum often to view responses to your questions and the questions of other community members.

18 Replies

wu.kristi
Level 1
Level 1

I would like to do VMware vMotion between ESX servers in primary and disaster recovery data centers. What technologies does the Nexus 7000 have that allow me to do that?

The Cisco OTV (Overlay Transport Virtualization) solution provides the ability to do simple IP-based LAN extension for long-distance vMotion (see the VMware validated paper below).
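For a rough idea of what that involves on the Nexus 7000, a minimal OTV sketch looks something like the following. The interface name, site VLAN, group addresses, and VLAN range are placeholders for illustration only; verify the exact syntax against your NX-OS release and the OTV configuration guide.

feature otv
otv site-vlan 99
!
interface Overlay1
  otv join-interface Ethernet1/1      ! uplink that faces the IP core between sites
  otv control-group 239.1.1.1         ! ASM group used by the OTV control plane
  otv data-group 232.1.1.0/28         ! SSM range used to carry extended multicast data
  otv extend-vlan 100-150             ! the VLANs stretched between the data centers
  no shutdown

The only requirement from the transport network is IP reachability (with multicast enabled for the control and data groups in this mode), which is what keeps the LAN extension simple.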

jmprats
Level 4
Level 4

Hi, we have an old Catalyst 6009 in our core. Now we are thinking about our options to migrate. We have a VMware virtualized data center and we want to add an iSCSI SAN; the Catalyst 6000 is switching and routing traffic from different floors and between two buildings. What is our best option to migrate? Why should we choose a Nexus? Is a Nexus a good option to replace the Catalyst 6000, or would we lose switching and routing functions?

Can a Virtual Nexus with a Catalyst 4000 or 3750 stack be another option?

Thanks

The Nexus 7000 hardware provides a superset of the functions of the Catalyst 6500 shipping today. Software feature support has almost caught up (with MPLS support coming shortly).

For today's data center virtualization requirements, Nexus switching is your best option.

Features such as:

VDC - helps virtualize the switch itself, so you can have up to 4 virtual switches in a single physical chassis for lower overall cost, collapsed physical layers, etc.

FabricPath - allows you to build a large Layer 2 network for large-scale VM mobility (a minimal configuration sketch follows this list).

OTV - allows simple LAN extension over any network (it just requires IP connectivity) between data centers for VM mobility solutions.

N2000 support - allows ToR flexibility with an EoR feature set and simple management (a single point of management for 1500 ports).

Convergence for LAN/SAN - FCoE and multi-hop FCoE support, plus IP SAN support.

More 10GE (512-port 10GE scale) and future 40/100GE capacity make this an ideal platform for your future DC.
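As a minimal FabricPath sketch (feature licensing, module support, and the VLAN and interface numbers here are assumptions for illustration only):

install feature-set fabricpath
feature-set fabricpath
!
vlan 100-150
  mode fabricpath               ! VLANs carried across the FabricPath domain
!
interface Ethernet1/1
  switchport mode fabricpath    ! core-facing link uses FabricPath encapsulation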

OK, I'll go with Nexus. But I'm a small enterprise (200 users, 2 buildings) with virtualization and iSCSI. The Nexus 7000 looks like more than I need, or not? Can I go to a lower-end Nexus, or do those have less functionality in switching or routing?

Thanks

You may use the VDC feature to get multiple switches out of the Nexus 7000. You can make the single 7000 into up to 4 separate switches, which helps to collapse 2 layers of the network. Potentially have the users connect directly to the Nexus 7000, or through a Nexus 2000, in one VDC, and carve out another VDC for core/routing functions.
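As a hedged sketch of that carve-out from the default VDC (the VDC names and interface ranges are invented for the example, and additional VDCs require the appropriate license):

vdc access
  allocate interface Ethernet2/1-24    ! user/server-facing ports in the access VDC
vdc core
  allocate interface Ethernet2/25-32   ! uplink ports in the core/routing VDC
!
switchto vdc core                      ! move into the core VDC to configure it

Each VDC then runs its own configuration and management, so the two logical layers stay separated even though they share one chassis.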

Jon Marshall
Hall of Fame
Hall of Fame

Balaji

Slightly off-topic, but I am confused as to the available bandwidth per slot on the Nexus 7K models. With the 6500 it is relatively straightforward, i.e. on a 6509 the Sup720 can provide 40Gbps of connectivity to the switch fabric per linecard. With the new Sup2T coming out it will be able to provide 80Gbps per slot.

With the Nexus 7K I have heard differing figures, i.e. 80Gbps per slot, but then some people have referred to 256Gbps per slot. I believe there are fabric modules that can be used, and I have looked at the data sheets, but it's still not entirely clear.

Could you explain exactly how much switch fabric bandwidth per slot the N7K scales to, and how this differs depending on the fabric modules used?

Finally, does the N7K have the concept of CFC/DFC forwarding as the 6500 does?

Many thanks

Jon

Jon,

I'll let Balaji give us the official answers, but I attended the N7K architecture presentation at this year's Networkers (session #3470), and here's what the slides say:

Each fabric module provides 46Gbps per I/O module slot
Up to 230Gbps per slot with 5 fabric modules
Initially shipping I/O modules do not leverage full fabric bandwidth
Maximum 80G per slot with 10G module

We just purchased several N7K's, each w/ three fabric modules.

Only needed two to achieve the 80G / slot bandwidth, but bought one extra for redundancy.

From my understanding, N7K forwarding works in a DFC-like fashion, as in the 6500s.

Each linecard has its own forwarding engine, but the sup acts as the central arbiter that grants the linecards access to the switching fabric.

Thanks for the details - much appreciated.

Jon

The Nexus 7000 can support up to 5 fabric modules, each providing 46 Gbps per slot today. Depending on the number of fabric modules used, the available bandwidth per slot goes up. If all 5 of them are used, you can get up to 230 Gbps/slot from the backplane perspective. We have two types of cards: the M1 10GE-based cards are capped at 80 Gbps/slot, while the F1 10GE modules can go up to 230 Gbps/slot.

To answer your question, the Nexus 7000 can support up to 230 Gbps/slot (if you use all 5 fabric modules).

The N7K uses the DFC model - distributed forwarding.
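If you want to see what a given chassis is actually delivering, something along these lines should show it (the exact commands and output fields vary by NX-OS release, so treat this as an approximate pointer rather than exact syntax):

show module                          ! lists the installed supervisor, I/O, and fabric modules
show hardware fabric-utilization     ! per-module fabric bandwidth utilization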

Balaji

Many thanks for that.

Jon

huangedmc
Level 3
Level 3

hi Balaji,

I have some questions regarding multicast through OTV.

It seems site mcast groups (Gs) are mapped to an SSM group range (Gd) (core delivery groups) in the core.

Are there any guidelines or best practices regarding this part of the OTV configuration?

Is it a one-to-one mapping between Gs & Gd?

If so, do we define the same number of delivery groups as site groups?

For example, if mcast groups 239.192.0.0/16 exist in both datacenters, should we define the delivery group as:

otv data-group 232.192.0.0/16

Can we use the same exact groups between the two, so that there's no confusion which mcast group is mapped to which delivery group when troubleshooting?

i.e. otv data-group 239.192.0.0/16

We'll obviously have to change the default SSM group range on the switch from 232.0.0.0/8 to 239.0.0.0/8 using the "ip pim ssm range" command.

What happens if we have more Gs than Gd?

Can multiple Gs's be mapped into a single Gd?

Also can the control-group fall into the same range as the data-group range?

For example, if we picked 239.192.1.1 as the control-group, can we use 239.192.0.0/16 as the data-group?

hi Balaji,

I have some questions regarding multicast through OTV.

It seems site mcast groups (Gs) are mapped to an SSM group range (Gd) (core delivery groups) in the core.

Are there any guidelines or best practices regarding this part of the OTV configuration?

Is it a one-to-one mapping between Gs & Gd?

If so, do we define the same number of delivery groups as site groups?


[bsivasub] Theoretically, a single multicast group could be defined as an OTV data group. As always, the right number of groups to use depends on a trade-off between the amount of multicast state to be maintained in the core and the optimization of Layer 2 multicast traffic delivery. If a single data group were used in the core to carry all the (S,G) site multicast streams, remote sites would receive all the streams as soon as a receiver joined a specific group. On the other hand, if a dedicated data group were used for each (S,G) site group, each site would receive multicast traffic only for the specific groups joined by local receivers, but more state would be kept in the core.
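To illustrate that trade-off concretely (the prefixes below are made up for the example), the two extremes would be configured under the overlay interface roughly as:

! Option A - a single shared delivery group: least state in the core,
! but every remote site receives every extended multicast stream
otv data-group 232.1.1.1/32
!
! Option B - a range of delivery groups: more state in the core,
! but each site only pulls the streams its local receivers joined
otv data-group 232.1.1.0/26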


For example, if mcast groups 239.192.0.0/16 exist in both datacenters, should we define the delivery group as:

otv data-group 232.192.0.0/16


[bsivasub] Not necessarily.  See above comment.


Can we use the same exact groups between the two, so that there's no confusion which mcast group is mapped to which delivery group when troubleshooting?

i.e. otv data-group 239.192.0.0/16

We'll obviously have to change the default SSM group range on the switch from 232.0.0.0/8 to 239.0.0.0/8 using the "ip pim ssm range" command.


[bsivasub] I am not sure that would work. I would suggest you keep them separate.


What happens if we have more Gs than Gd?


[bsivasub] The mapping algorithm is a simple round robin, so some Gd groups will have more than one Gs mapped to them.


Can multiple Gs's be mapped into a single Gd?


[bsivasub] Yes.

Also can the control-group fall into the same range as the data-group range?

For example, if we picked 239.192.1.1 as the control-group, can we use 239.192.0.0/16 as the data-group?

[bsivasub] The control-group has to be an ASM group; it cannot be an SSM group.
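Pulling the group-related answers together, a hedged sketch of the multicast pieces (all addresses here are examples only, not recommendations) could be:

ip pim ssm range 232.0.0.0/8          ! keep the default SSM range for the delivery groups
!
interface Overlay1
  otv control-group 239.192.1.1       ! ASM group for the OTV control plane
  otv data-group 232.192.0.0/26       ! SSM delivery groups, kept separate from the site groups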

djsdanish
Level 1
Level 1

I have heard that Cisco has introduced a simulator for Nexus switches named "Titanium". How can an individual access that simulator for learning purposes?

Thanks and Regards
