Thanks for the answer. Very helpful. Just out of curiosity, does this mean only that the ports on the card can't be put in "no switchport"? If there were VLAN interfaces defined in the config, could it still route?
Here is the reply I got from our AM.
That card has an NTE (not-to-exceed) price of $35. The reason it's so much lower than the existing 32-port card is that the new card is an L2-only card. It's designed for connecting FCoE throughout the DC so storage traffic can traverse the Ethernet fabric through the core 7Ks. You can mix existing M-series cards, which have L3 functionality, with the new L2 F-series cards and still provide L3 to the F-series ports.
F1-series linecards do NOT have L3 switching capabilities; the forwarding engine can only make L2 forwarding decisions.
The system won't allow you to turn a port on an F1-series linecard from an L2 switchport into an L3 (routed) port (e.g. "interface ethernetX/Y; no switchport" will return an error).
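As a quick sketch of what that looks like at the CLI (the interface number is hypothetical, and the exact error text varies by NX-OS release, so none is shown here):

switch(config)# interface ethernet10/1
switch(config-if)# no switchport      # rejected on an F1-series port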
While the module itself won't permit anything other than L2 switchports, if the system has any M1-series linecards installed, they can be used to make the SVI forwarding decision for any L3 traffic on that VLAN. e.g.
interface ethernet10/1        # module 10 is F1-series linecard
  switchport access vlan 100
!
interface vlan100             # SVI on M1-series linecard(s) within system
  ip address 10.1.1.1/24
In this case, the F1-series linecard(s) can make use of any M1-series linecard(s) to provide the L3 switching.
Hi, I'm fairly new to the Cisco Nexus 7000 series switches and I was recently reading up on the new card. I am trying to understand why it matters whether I have any M1-series line card, as you posted in your reply. Wouldn't creating an SVI as in your example below be a function of the supervisor?
Example: simple scenario. What if I had a Nexus 7000 chassis with 1 x Sup (N7K-Sup1) and 1 x N7K-F132XP-15? I may have 10 servers all plugged into the same line card but all on different VLANs. All the servers need to communicate with each other, which would mean I need to create SVI interfaces.
To extend this scenario, there might be a requirement to route to an external device like a router. Although not pretty, could I not create an SVI interface on the Nexus 7000 (call it Vlan 101, IP address 10.2.1.1/24) and plug the router into a dedicated 10Gbps interface configured for Vlan 101?
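That idea could be sketched roughly as follows (the interface number is hypothetical; and as the replies in this thread explain, the SVI itself only forwards if an M1-series card is present in the system to host the L3 function):

interface ethernet1/1         # hypothetical 10Gbps F1 port facing the router
  switchport access vlan 101
!
interface vlan101
  ip address 10.2.1.1/24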
I ran into a similar problem with a WIC line card for an ISR router in the past, where it did not support routed interfaces. I was, however, able to create an SVI interface quite easily to work around the issue.
I don't see why the M1 line card would be needed in the scenario above if the SVI interface can be created on the Supervisor.
interface vlan100             # SVI on M1-series linecard(s) within system
  ip address 10.1.1.1/24
The Nexus 7K architecture is different from the Catalyst 6500: in a Nexus 7K system the supervisor is never part of data-plane operation. Each M1-series card has its own integrated Forwarding Engine. This means that when you create an SVI, it does not in fact operate on the supervisor, but on the Forwarding Engine of the card. With the introduction of the F1-series card (which has no L3 services), you need at least one M1-series card in the system to be able to create and use an SVI.
So in the case of (1) M-series and F-series combined in a single chassis, what happens if the M-series module fails? Do all SVI's go down? If so, it sounds like you'd have to plan on multiple M-series for L3 redundancy.
What happens if just the Sup goes down? Does traffic keep forwarding since the Sup isn't part of the data plane?
Found out that yes, if your single M-series module fails, you will lose L3 and your SVI's will go down. So if you want L3 redundancy, make sure you run at least 2 M-series blades.
Also if the Sup does go down, packets/existing flows keep forwarding for a little while. But at some point things will start to fail as modules want to query the Sup.
Thanks for the info. I've been mulling over the "F1 doesn't support L3 routing decisions, but don't worry about it if you have M1 line cards" point. Does this mean that some resource on the M-series line card will be used by the F1 cards? If so, what resources: a FIB, CEF, bandwidth? Will I need to keep x number of ports free on the M card for y number of ports on the F1 card? I feel like I may be missing something here... If you have time to review I'd really appreciate it.
From what I understand, the F-series module will form a backplane port-channel to one or more M-series line cards and use up to 40/80Gbps per linecard (depending on whether you have the 1G or 10G LC).
The keyword to search for is Cisco Nexus Mixed Mode or Mixed Chassis.
To display the number of packets sent by the F Series modules to each M Series module for proxy forwarding:
show hardware proxy layer-3 counters
For more info, see:
thiland has already answered the main questions. Yes, when traffic received on an F1 interface has to be routed, the system uses resources on the M1 cards as a "proxy forwarder". This resource is a portion of the forwarding-engine capacity on an M1 linecard allocated to routing traffic on behalf of the F1 ports.
This resource is shared with the local M1 physical interface(s), so there is contention for it. You can control which physical port(s) on M1 modules to exclude from proxy forwarding (with the "hardware proxy layer-3 forwarding" CLI command). By default, all available resources on M1 cards are used for proxy forwarding, and the source F1 interface can load-share incoming traffic between these resources using internal logical connections (a kind of "port-channel"). The best view is provided by the "show hardware proxy layer-3 detail" command.
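For illustration, excluding some M1 ports from proxy forwarding and then checking the result might look like this (the module/port range is hypothetical; verify the exact command syntax against your NX-OS release):

hardware proxy layer-3 forwarding exclude interface ethernet 3/1-8
show hardware proxy layer-3 detail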
You should consider this behavior when you develop your data center design. Usually Cisco recommends using M1 interfaces in L3 mode for uplinks/interlinks and F1 interfaces for connecting L2 access switches. If you have mainly L2 traffic with a partial L3 routing need, then a mixed M1/F1 chassis can provide a good and cost-effective solution. If the majority of your traffic must be routed between VLANs, then you should consider using M1 interfaces towards the access layer as well.
Of course if you want to use specific technologies like FCoE or FabricPath you have no choice - these are supported only on F1 interfaces.
Please note that if you use VDCs, then you must consider each VDC as an isolated entity: an M1 line card in a given VDC is not able to provide proxy-forwarder services to F1 interfaces allocated to a different VDC.
Any N7K-M series will do?
I am planning to use N7K-M148GT-11 and N7K-F132XP-15.
Also, when an M-series port acts as a proxy router for the F series, does this mean that port will no longer be usable?
No, L3 proxy means that L3 forwarding capability will be shared between the F series and M series. You can potentially oversubscribe the L3 forwarding ASIC (the N7K-M148GT-11 is only 40G total).