Cisco Support Community

New Member

Features of Nexus 7000 series - load distribution

I am looking at a network architecture where I want to aggregate 100 10 Gb/s Ethernet flows (line rate assumed) into 10 100 Gb/s Ethernet flows for data center exit. Can the 7000 be set up to distribute packets incoming on the 10 Gb/s ports to the 100 Gb/s ports with some sort of fairness algorithm? I imagine I could just create static groups (these 10 10 Gb/s ports go to this one 100 Gb/s port), but I was hoping for greater flexibility.

Thanks for sharing any thoughts/info you have.

4 REPLIES
Silver

Re: Features of Nexus 7000 series - load distribution

Kevin Buchs wrote:

I am looking at a network architecture where I want to aggregate 100 10 Gb/s Ethernet flows (line rate assumed) into 10 100 Gb/s Ethernet flows for data center exit. Can the 7000 be set up to distribute packets incoming on the 10 Gb/s ports to the 100 Gb/s ports with some sort of fairness algorithm? I imagine I could just create static groups (these 10 10 Gb/s ports go to this one 100 Gb/s port), but I was hoping for greater flexibility.

Thanks for sharing any thoughts/info you have.

That's a...lot of data.

I'm not sure you'll get a Nexus chassis/card combination that'll support 100 x 10 Gb/s line-rate links. OK, you could in a 7018 with 13 N7K-M108X2-12L modules (8 ports per module, for a total of 104 x 10 Gb/s ports), but you won't be able to run this *and* 10 x 100 Gb/s ports. The 100 Gb/s ports only come on a two-port card (N7K-M202CF-22L), and by the time you allow for Sups (2, in a chassis like this!) you're going to run out of slots: 13 for your 10 gig ports and 2 for your sups leaves only 3 slots for 100 Gb/s cards, which is only 6 x 100 Gb/s ports.

Also, depending on what you mean by "aggregate", you'll also run into channel-group limitations. From memory, you can only put 8 ports into a single port-channel - which would mean you'd need multiple port-channel groups to deal with your 10 Gb/s ports.
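To make the 8-member limit concrete, here is a rough NX-OS CLI sketch (interface and channel-group numbers are invented for illustration; the exact load-balance keyword varies by platform and NX-OS release, so check the configuration guide for your release):

```
! Illustrative sketch only - a classic port-channel tops out at 8 active
! member links, so 100 ingress ports would need at least 13 channel groups.
interface ethernet 1/1-8
  channel-group 10 mode active    ! LACP; a 9th member would not be active

interface port-channel 10
  description first of many 8-port bundles

! The hash used to pick a member link is configurable, e.g.:
port-channel load-balance src-dst ip-l4port
```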

Then you get to the load-balancing algorithm - which isn't really "load balancing". You could conceivably flood one of your 100 Gb/s ports while leaving the other 5 running with nothing, simply because every packet of a given flow is hashed onto the same member link by LACP. This is maybe oversimplified, but my point is there is no guarantee that you'll get full utilisation of all the 100 Gb/s ports even if they're all in the one port-channel.

I'm not sure the Nexus would be the right platform for moving this kind of data based on what you've said. You might need to look at an ASR9000 (an ASR9922 with 6 x A9K-24X10GE-SE 24-port 10 Gb/s cards plus 5 x A9K-2X100GE-SE 2-port 100 Gb/s cards would theoretically do it), but man, you're going to be paying some large dollars to get this working!

Good luck!

New Member

Re: Features of Nexus 7000 series - load distribution

Darren.q,

Thanks for the reply. There are 48-port 10 GE modules for the Nexus 7000 available now, so I would need 3 of those plus 5 of the 100 GE modules, taking up 8 slots. Allowing for 2 supervisor modules, that should all fit into a 7010 chassis.

I am most interested if the operation of the switch can be configured to do the distribution operation, coming from 100 10 GE ports and keeping 10 100 GE ports full. Is that function built into the switch?

Silver

Re: Features of Nexus 7000 series - load distribution

Kevin Buchs wrote:

Darren.q,

Thanks for the reply. There are 48-port 10 GE modules for the Nexus 7000 available now, so I would need 3 of those plus 5 of the 100 GE modules, taking up 8 slots. Allowing for 2 supervisor modules, that should all fit into a 7010 chassis.

I am most interested if the operation of the switch can be configured to do the distribution operation, coming from 100 10 GE ports and keeping 10 100 GE ports full. Is that function built into the switch?

Sorry mate - that link you posted is to 48 port 10/100/1000 modules, *not* 48 port 10 gig modules.

There *is* a 48 port 1/10 gig module, but it's an F2 series line card - which means, from memory, they're only layer-2 - no layer 3 functionality in them.

And there is no way you can combine 100 ports into a single EtherChannel, as I previously said. You could connect all the ports, yes - provided there are different devices on the other end - using VLANs, but I don't know that you could control the output. And you still can't port-channel 10 x 100 gig ports - you are limited to 8 ports in a single port-channel.

And, using F2 cards, it's *all* layer 2, and as far as I know you'll only get maximum bandwidth if you keep the traffic on the one card (i.e. the single 10 gig card). As soon as you hit the switching fabric, you'll take a performance hit, depending on how many fabric modules you have in the Nexus.

Without more detail on what you want to achieve, I can't really say much more. There's no way to "aggregate" 100 ports into a single flow on a Cisco switch, no matter what the speed - and by aggregate, I mean combining them into a single channel group. If you mean something different, you'll need to define it better.

Cheers.

Cisco has a new solution

Cisco has a new solution called ITD:

http://blogs.cisco.com/datacenter/itd-load-balancing-traffic-steering-clustering-using-nexus-5k6k7k


ITD (Intelligent Traffic Director) is a hardware-based, multi-Tbps Layer 4 load-balancing, traffic-steering, redirection, and clustering solution on the Nexus 5K/6K/7K series of switches. It supports IP stickiness, resiliency, NAT (EFT), VIP, health monitoring, sophisticated failure-handling policies, N+M redundancy, IPv4, IPv6, VRF, weighted load balancing, bi-directional flow coherency, and IP SLA probes including DNS. No service module or external appliance is needed. ITD is far superior to legacy solutions like PBR, WCCP, and ECMP.
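For a feel of what ITD configuration looks like, here is a rough sketch (group/service names, IPs, and interfaces are placeholders, and the exact CLI varies by NX-OS release - the ITD configuration guide linked from the blog post above is the authoritative reference):

```
! Hedged sketch only - names and addresses are invented for illustration.
feature itd

itd device-group EXIT-ROUTERS
  probe icmp
  node ip 203.0.113.1
  node ip 203.0.113.2

itd EXIT-LB
  device-group EXIT-ROUTERS
  ingress interface ethernet 1/1
  ingress interface ethernet 1/2
  load-balance method src ip
  no shutdown
```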

