
ASK THE EXPERT - CISCO NEXUS 5000 AND 2000 SERIES

Oct 7th, 2010

Welcome to the Cisco Networking Professionals Ask the Expert conversation. This is an opportunity to get information on data center switching with the Cisco Nexus 5000 Series Switches and Nexus 2000 Series Fabric Extenders from Lucien Avramov. Lucien is a Customer Support Engineer at the Cisco Technical Assistance Center. He currently works in the data center switching team supporting customers on the Cisco Nexus 5000 and 2000. He was previously a technical leader within the network management team. Lucien holds a bachelor's degree in general engineering and a master's degree in computer science from Ecole des Mines d'Ales. He also holds the following certifications: CCIE #19945 in Routing and Switching, CCDP, DCNIS, and VCP #66183.

Remember to use the rating system to let Lucien know if you have received an adequate response.

Lucien might not be able to answer each question due to the volume expected during this event. Our moderators will post many of the unanswered questions in other discussion forums shortly after the event. This event lasts through October 22, 2010. Visit this forum often to view responses to your questions and the questions of other community members.

GIULIO FAINI Fri, 10/08/2010 - 02:59

I have 5 questions I hope you can enlighten me about:

  1. If I do multi-layer vPCs (or maybe it's called back-to-back vPCs), do the vPC IDs that you associate with the interfaces need to be the same on all facing switches, or are they only locally significant to each switch?
  2. LACP between NX-OS and a 6708 card of a 6500 running 12.2(33)SXI3 is not working (I tested it and channel-group works only as 'on' on both sides and not 'active' as is recommended on CCO). So, basically, how do you run LACP between NX-OS and IOS on the 6500?
  3. If both vPC peers are DOWN because of a network failure and then they both restart, how long does it take vPC to converge and relay traffic? I think we need to configure 'reload restore' under the "vpc domain" config. Is that all, or are more commands needed? Why is this command not enabled by default?
  4. If I connect a L3 device (like a router) to both vPC peers, best practices say that you need to use L3 links to both vPC peers. Can I run the L3 escape link (in case one of the 2 L3 links is down) over the vPC peer link? Where does it say that on CCO? I just need to add the VLAN where I want the escape-link traffic to pass.
  5. Over the vPC peer link, should I run bridge assurance, UDLD, loopguard, mtu 9216? What is the best practice recommendation for configuring a vPC peer link?

Thanks,

Giulio.

Lucien Avramov Sat, 10/09/2010 - 11:53

1. The vPC domain ID needs to be local to the pair of Nexus switches you are pairing together. For example, if you have a pair of N7Ks and a pair of N5Ks, both 5Ks need to be in the same domain ID, and the pair of 7Ks in their own domain ID (see the sketch at the end of this reply).

2. We need to troubleshoot LACP here to find out where the problem is. It should work with the Catalyst 6500 or any other LACP-capable switch, so we need to look further at the specific code you are running to figure out the LACP issue.

3. I'm not sure about the reload restore command you are referring to. Are you sure this is on a Nexus 5000? If so, let me know what code you are running.

4. Configure a vPC on the two physical connections from your 5Ks to the router. Then, for that one vPC, use switchport mode access and place both in the same VLAN. You won't need an SVI with an IP address for that VLAN unless you want to run a ping test, for example. This VLAN used for the router will of course have to be allowed in the VLAN list on the peer link. In general, all the VLANs you have for vPC need to be allowed on the vPC peer link.

5. Over the peer link, run spanning-tree port type network; this will enable bridge assurance. As far as the MTU, you don't need to make any specific change: if your MTU is globally set to 9216, then it will apply to the peer link as well.
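
Pulling points 1, 4 and 5 together, here is a rough sketch of what one N5K of the pair could look like; the domain, VLAN, port-channel and interface numbers are example values only, not taken from this thread:

vpc domain 10
  peer-keepalive destination 192.168.1.2 source 192.168.1.1

vlan 100

interface Ethernet1/31-32
  channel-group 10 mode active

interface port-channel10
  switchport mode trunk
  vpc peer-link
  spanning-tree port type network

interface Ethernet1/29
  channel-group 29 mode active

interface port-channel29
  switchport mode access
  switchport access vlan 100
  vpc 29

The N7K pair would use its own vpc domain number, and the access VLAN (100 in this example) also has to be in the allowed list on the peer-link trunk.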

GIULIO FAINI Mon, 10/11/2010 - 03:42

1.- But can you confirm that in a multi-layer vPC the vPC domain IDs must be unique across the 2 different vPC domains? If 2 N7Ks have "vpc domain 1" and they are connected to 2 N5Ks, the 2 N5Ks cannot also have "vpc domain 1", right?

2.- I opened a case.

3.- Yes, it's on the N7K running 5.03. Anyway, in the vPC protocol (I guess it's the same with the N5K), what happens if both peers are down and we restart the 2 switches? Please see:

http://www.cisco.com/en/US/docs/switches/datacenter/sw/5_x/nx-os/interfaces/configuration/guide/if_vPC.html#wp1643030

Why is it not enabled by default on both the N7K and N5K?

4.- Yes, we use 2 VLANs with SVIs (can we do without SVIs?) to connect the 2 Nexus to the router... I am not clear on the config. For example, if I have:

Nexus1 --- 1/29 ---------------------------------- 1/2
   1/31 || 1/32                                    Router
   1/31 || 1/32                                    1/1
Nexus2 --- 1/29 ----------------------------------

Can you tell me the config you would use on Nexus1 and Nexus2 for interfaces 1/31-32 and 1/29, and for the VLANs?

5.- Do you recommend using UDLD on the peer link?

Lucien Avramov Mon, 10/11/2010 - 10:04

1. You can use the same vPC domain ID for different Nexus pairs. It will just confuse you more than anything else, but it will work. So I don't recommend such a configuration.

3. This feature is not yet on N5K, but will be in the 5.0 release coming out soon. As far as why it's not enabled by default, I don't know. I will enquire.

4. Why 2 vlans here? That makes you use 2 networks on your router.

You could have a layer 3 port-channel on the router and bundle both router interfaces for the same network. Then, on both Nexus switches, you can have an EtherChannel with an access VLAN (see the sketch at the end of this reply).

5. No, LACP is sufficient on the peer link for failure detection. UDLD would be useful if you are not using LACP over the peer link. Overall, it's a better solution to use LACP.
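
For the layer-3 port-channel on the router side mentioned in point 4, here is a rough IOS-style sketch, assuming the router platform supports LACP port-channels (interface names and addressing are placeholders):

interface Port-channel1
 ip address 192.0.2.1 255.255.255.0

interface GigabitEthernet0/1
 no ip address
 channel-group 1 mode active

interface GigabitEthernet0/2
 no ip address
 channel-group 1 mode active

The Nexus side of this link would be the access-VLAN vPC sketched in the earlier reply.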

GIULIO FAINI Tue, 10/12/2010 - 06:14

Thanks for your valuable answers.

About point 4), it's very interesting what you say, but I am not sure that it will work using a L3 EtherChannel.

If you see the 2 animation slides, I am afraid it may create a black hole of traffic when you use an EtherChannel with a L3 device.

What do you think ?

Lucien Avramov Tue, 10/12/2010 - 09:59

I don't see this as an issue; your document seems outdated. There is now the peer-gateway feature on the 7K to prevent this from happening.

The vPC peer-gateway capability allows a vPC switch to act as the active gateway for packets that are addressed to the router MAC address of the vPC peer. This feature enables local forwarding of such packets without the need to cross the vPC peer-link. In this scenario, the feature optimizes use of the peer-link and avoids potential traffic loss.
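
For reference, enabling it is a single knob under the vPC domain on the N7K; a minimal sketch, with the domain number being just an example:

vpc domain 1
  peer-gateway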

aamercado Mon, 10/11/2010 - 00:12

2 questions:

1. With the N5K active-active topology, under "system qos" - is there any way to set up QoS without restarting the 2148 FEX? I had QoS set up and then changed it back to default based on CCO instructions, which restarted all the FEXes associated with this N5K pair, but the CCO doc didn't mention it would do this. I got the global vPC inconsistency log message when I changed QoS back to default.

2. N5k on 4.2(1)N1(1) and N7k on 5.0(2) [build 5.0(0.66)] has packet loss (see SR 615676537)


A pair of N5ks in active-active mode with VLANs 211 and 207 up to the redundant N7K cores. I checked the following for N7K Core-1:

ospf DR master

hsrp primary

vpc primary

stp root


Below is an example of a vlan config:


interface Vlan207
  no shutdown
  no ip redirects
  ip address 10.100.207.2/24
  ip ospf passive-interface
  ip router ospf 1 area 0.0.0.200
  ip pim sparse-mode
  ip igmp version 2
  hsrp 207
    preempt delay minimum 180
    priority 90
    timers  1  3
    ip 10.100.207.1
  ip dhcp relay address 10.100.211.71


Both the N7k and N5k have the same config below:


interface port-channel13
  description TO-N5K-DC-EDGE3and4 PORTS 3/9 and 3/10****
  switchport
  switchport mode trunk
  vpc 13
  switchport trunk allowed vlan 203-204,207-208,211,223-224
  logging event port link-status
  logging event port trunk-status


On the N5k, the switchport is basic:


interface Ethernet141/1/11
  switchport access vlan 207
  spanning-tree port type edge



When a server (2003 SP3 or 2008) on VLAN 211 copies files from 207, it takes a long time. A Wireshark trace shows dropped packets, meaning I get a lot of "tcp previous segment loss". For example, a 1.5 GB file which normally takes less than a minute takes 15 minutes to copy.

I tried different servers and it doesn't appear to be server related but network related. I tried turning off "checksum offload" and "large send offload" on the NICs but it is still a problem... there is no NIC teaming - just one NIC on each server.


From a 10.100.211.X server, I tried a "ping 10.100.207.X -t -l 1514" and there were only a few packet drops as opposed to the ethereal trace, so I am not sure if I am actually getting packet loss from the network or if it is still a server issue. I tried copying from servers on different OSes (i.e. 2003 versus 2008) and had the same problem.


I also looked at perfmon during a file transfer and it looks fine.


On non-active/active setups, like a single-switch user IDF (i.e. 4500, 3750, 6500) or a single N5k which has a vPC up to the N7k cores, file transfers are fine... so it just seems to be related to my redundant N5k (active-active) topology. I tried turning jumbo on and off as well as setting QoS up and removing it, but no luck. Currently jumbo and QoS are set back to default. I also tried enabling "peer-gateway" on the N7k vPC cores, but same problem.


I also tried file transfers between N5k pairs, meaning N5k (active-active) pair #1 to/from N5k (active-active) pair #2, and got slow file transfers... so the issue seems related only to the N5k active-active topology:

sh platform and sh hardware show negligible discards/drops, although I am not sure about the mtu and crc stomps?

Gatos 0 interrupt statistics:
Interrupt name                                 |Count   |ThresRch|ThresCnt|Ivls
-----------------------------------------------+--------+--------+--------+----
gat_fw2_INT_ig_pkt_err_eth_crc_stomp           |a5a3    |0       |3       |0
gat_mm2_INT_rlp_rx_pkt_crc_stomped             |a5a3    |0       |3       |0
Done.

Gatos 1 interrupt statistics:
Interrupt name                                 |Count   |ThresRch|ThresCnt|Ivls
-----------------------------------------------+--------+--------+--------+----
gat_fw0_INT_eg_pkt_err_eth_crc_stomp           |14bd    |0       |1       |0
gat_fw1_INT_eg_pkt_err_eth_crc_stomp           |3a04    |0       |4       |0
gat_fw3_INT_eg_pkt_err_eth_crc_stomp           |6759    |0       |1       |0


Any ideas?

Lucien Avramov Mon, 10/11/2010 - 10:34

1. Can you be more specific and show me what configuration you applied?

2. You have the exact same configuration for 141/1/1 on both N5Ks? Do you mean the transfer is also slow, when you connect the server directly to the N5K? Is it a dual homed server or single homed (this is key)? Where is the vlan 211 located, is it on the same N5K pair? Are you using enough peer-links between the two 5ks? How many physical 10 GE links do you have between them? What protocol is used for the file transfer? CIFS?

aamercado Mon, 10/11/2010 - 12:39

1. Spoke to TAC; apparently on a multi-tier vPC (i.e. my N7k cores are vPC and my downstream dual-homed N5ks are vPC), making a QoS change will create a vPC inconsistency that will take down my network until the configs are manually synced. Darn - wish that wasn't the case, as I want to turn on jumbo frames on the N7k and N5k without disrupting the network.

2a. Yes, exact same configuration for 141/1/1 on both N5Ks

2b. No, never tried connecting servers directly to N5k

2c. All Windows servers are single-homed, while our non-Windows servers (i.e. Isilon) use a static 2-port port-channel (non-LACP). When I transfer from single to single, same problem, or single to dual-homed servers. Whether single or dual-homed doesn't matter, as when I do the same file transfer from a non-paired N5k (i.e. a single N5k, 45XX, 65XX, or 3750 which has a vPC to the N7k cores), the transfer is fast... so it only seems isolated to servers hanging off the N5k pair.

2d. Yes, vlan 211 and 207 are located on the same N5K pair.

2e. Yes, 20 Gig between the N5ks, with 7 x 2148 FEXes on the N5k pair.

2f. Yes, CIFS for file transfer  - ethereal traces show SMB setting up the connection and file transfer

Lucien Avramov Tue, 10/12/2010 - 10:04

1. The config-sync feature coming in the next code release of NX-OS for the 5K will allow you to make those changes more smoothly across your N5K pair.

2. In your design for the file transfer, is it 1GB server ---- Nexus ---- 1GB server?

Let's look at the counters for drops / discards and at the queuing:

Assuming you have a 2148, look at the following outputs:


N5K# attach fex 100

fex-100# show platform software redwood sts   -> look at the ASIC and HI port concerned

fex-100# show platform software redwood drops

Check whether these counters increment over time.

aamercado Tue, 10/12/2010 - 13:15

The FEX upstream to the N5k has no increments but the host ports do slowly increment on:

red_hix_cnt_tx_lb_drop

Lucien Avramov Tue, 10/12/2010 - 13:39

Increment of red_hix_cnt_tx_lb_drop is not an issue.

This counter counts frames received and not sent back out of the interface they were received on; it's normal to see an increment there.

TAC would need to troubleshoot this further. From what you are saying, the FEX is not dropping traffic.
aamercado Tue, 10/12/2010 - 16:04

Any chance you have a method to confirm this on the N5k?

or even on the N7k?

Lucien Avramov Wed, 10/13/2010 - 09:32

A couple of other things for the 5K:

-Do you see CRC or other input errors incrementing on the interfaces? (show interface)

-On the 5K, you can identify drops with show platform fwm info pif ethernet 1/1

-Also look at the queuing with show queuing interface e1/1 and see if those counters increment

areanetsenato Mon, 10/11/2010 - 01:48

Hi,

I want some information about CFS (Cisco Fabric Services).

We're installing 4 Nexus 5000s and 12 N2K-C2248TPs, and configuration synchronization could be a problem.

We're using version 4.2(1)N1(1) but the documentation is unclear to me. Do you have any examples or supplemental docs?

Thanks

Marco

Lucien Avramov Mon, 10/11/2010 - 10:37

Very good point. CFS is used on the N5K for the vPC consistency check at this point. You can see it if you run show vpc consistency-parameters global.

As of today, with 4.2 code, your configuration needs to be the same on the N5Ks or N2Ks that are vPC'ed. In our next release, coming out very soon, there will be a feature called config-sync that will take advantage of CFS to push configurations between the two N5Ks that form a vPC pair.
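
As a rough sketch of what the config-sync CLI looks like in that later release (the profile name and peer management address here are placeholders, and the exact syntax should be checked against the release notes when it ships):

config sync
  switch-profile vpc-pair
    sync-peers destination 192.168.1.2
    commit

Configuration entered under the switch-profile is verified against the peer and pushed to it on commit.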

parkerd Tue, 10/12/2010 - 07:19

Hello,

We are utilizing a Nexus 5010 running 4.1(3)N2(1). It is hosting both 10 Gb Ethernet and FC.

Mod Ports  Module-Type                      Model                  Status
--- -----  -------------------------------- ---------------------- ------------
1    20     20x10GE/Supervisor               N5K-C5010P-BF-SUP      active *
2    8      8x1/2/4G FC Module               N5K-M1008              ok

On the few ports that specifically service only FCoE, can we allocate 100% bandwidth to fcoe?

Default appears to be 50% but that seems like a waste on these specific ports.

Is it possible to do per-port fcoe queuing? Any issues with putting fcoe at 100%?

Regards.

Lucien Avramov Tue, 10/12/2010 - 10:22

You can certainly change that, but to impact ingress/egress traffic it needs to be a queuing QoS policy, and it will be a global config for all the interfaces.

Look at this document:

http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/qos/421_n2_1/Cisco_Nexus_5000_Series_NX-OS_Quality_of_Service_Configuration_Guide_Rel_421_N2_1_chapter3.html#task_1135199

You could configure the following, but this would apply to all your traffic, so I wouldn't recommend it:

class type queuing class-fcoe
  bandwidth percent 100

class type queuing class-default
  bandwidth percent 0

You can check your queuing configuration with show queuing interface e1/1 for example.
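
For context, those class entries would sit inside a queuing policy-map that is then applied globally under system qos. A minimal sketch, assuming a hypothetical policy name all-fcoe and the 4.2-style QoS CLI covered by the linked guide:

policy-map type queuing all-fcoe
  class type queuing class-fcoe
    bandwidth percent 100
  class type queuing class-default
    bandwidth percent 0

system qos
  service-policy type queuing output all-fcoe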

Hausruc-TH Tue, 10/12/2010 - 23:38

Hi,

I've got a simple question:

Is there a command to configure the logging server source-interface?

I need to configure a VLAN Interface as the source-interface for logging, no VRF!

Just like "snmp-server source-interface trap VlanXXX" but for "logging server X.X.X.X ..."

Nexus 5010, version 4.2(1)N2(1)

Thanks,

Thomas

Ozden Karakok Wed, 10/13/2010 - 03:46

Hi Thomas,

On NX-OS 4.1(3)N2(1) and later you can use an SVI on the N5K. Actually, you are already using one for the "snmp-server source-interface".

feature interface-vlan

interface vlan A
  ip address <address>/<mask>

{vlan A needs to be trunked upstream to your L3 device}

vrf context default
  ip route {destination network or host of your syslog server} {vlan A gateway}

logging server x.y.z.w {log level} use-vrf default

Thanks

Hausruc-TH Wed, 10/13/2010 - 04:15

Hi okarakok,

Thanks for your reply, I'll try your solution.

So at this time there's no solution for defining a logging server source-interface completely _without_ a VRF?

Like with "snmp-server source-interface trap VlanXXX" or "ip tacacs source-interface VlanXXX", where no VRF is used.

Maybe it's possible in future NX-OS releases?

Regards,

Thomas

Ozden Karakok Wed, 10/13/2010 - 04:27

Hi Thomas,

I am afraid there is no other solution in the existing NX-OS releases.

I believe that might be possible in NX-OS 5.0(3) but it is not committed yet.

Thanks,

Ozden

Lucien Avramov Wed, 10/13/2010 - 07:12

Under 4.2 code, you can specify logging server x.x.x.x without the need to specify the VRF at the end; that will use the management VRF. Otherwise, as per the configuration guide, you need to specify the VRF (it will be the "default" VRF, since there are only 2 VRFs on the 5000).

Selecting the VRF or the interface should give you the same result: provided you have proper routing under that VRF, traffic will go via the interface of your choice. Check with show ip route x.x.x.x vrf default and show ip route x.x.x.x vrf management.
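
A minimal sketch of the two options described above (the server address and the severity level 6 are example values):

logging server 192.0.2.50 6                    -> logs via the management VRF
logging server 192.0.2.50 6 use-vrf default    -> logs via the default VRF, e.g. through an SVI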

glquinton Wed, 10/13/2010 - 12:21

I have a question about the 5000 series and a Windows NLB cluster (load balancing cluster). The issue I am running into is that the Windows NLB cluster uses a multicast MAC address with a non-multicast IP address. With any other Cisco device in my current environment we can work around this issue by adding a static ARP and MAC entry on our core switches (6513s). However, when I try to add a static MAC entry on the Nexus boxes I receive the following error:

"ERROR: mac address is an ip multicast mac"

I am wondering if there is a workaround, or, if there is no workaround, whether this will be addressed in any upcoming NX-OS releases?

Thanks in advance for the help!

Lucien Avramov Wed, 10/13/2010 - 23:45

I'm glad you asked.

There is an enhancement request that will resolve this query:

CSCtd22110 - Need support for static multicast MAC entries on Nexus 5000.
tiwang Thu, 10/14/2010 - 01:00

Hi out there

We have implemented a VMware environment around a Dell blade center and 2 sets of redundant Nexus switches - a set of 2 NX 5010s for access to iSCSI storage (not FCoE) and a set of 2 NX 5020s with a dual set of NX 2148s in our datacenter.

We have run into a few challenges here - first a simple one, where I discovered that I cannot enable jumbo frames on the individual interfaces of the 5010 handling the iSCSI; this has to be done as a policy-map, the published way being to define a QoS policy. But first - is this necessary at all? As far as I can see, jumbo frames are transmitted on the port-channel between the 5010 boxes without any need to modify the default policy map.

Here is output from one of the NX 5020s where I haven't defined the MTU in the default QoS policy map - as far as I can see, all ports running in trunk mode will process jumbo frames? But if I ping with DF set and a packet size of 2K, the packets are discarded?

SW5020-01# sh int | inclu jumb
27215058 jumbo packets  0 storm suppression packets
27985468 jumbo packets
2558685 jumbo packets  0 storm suppression packets
15912 jumbo packets
0 jumbo packets  0 storm suppression packets
0 jumbo packets
    ..

0 jumbo packets
0 jumbo packets  0 storm suppression packets
15337 jumbo packets
2027659 jumbo packets  0 storm suppression packets
7744804 jumbo packets
11939986 jumbo packets  0 storm suppression packets
17561369 jumbo packets

So - how can I verify whether the policy map I have defined is up and running? And is it necessary at all, since the box in fact is processing jumbo frames?

Second - we are running Microsoft WLBS on Windows 2003 servers in unicast mode. The Dell servers are equipped with 10 Gb pass-through modules and we have implemented the NX 1000V distributed switch on the VMware 4.1 servers. The Nexus 1000V VSM is reporting a lot of dropped packets - but I cannot find any definition of what causes dropped packets on the 1000V - I expect it is related to the WLBS in unicast mode.

Here is some output from the 1000V - a port-profile which is assigned to the NLB interface on the VM:

Nexus1000V# sh int veth23
Vethernet23 is up
    Port description is ts-12a1, Network Adapter 1
Hardware is Virtual, address is 02bf.ac15.9513
    Owner is VM "ts-12a1", adapter is Network Adapter 1
Active on module 12
VMware DVS port 1089
Port-Profile is ASP_NLB
Port mode is access
5 minute input rate 1466 bytes/second, 0 packets/second
5 minute output rate 1383193 bytes/second, 10049 packets/second
Rx
77329 Input Packets 224 Unicast Packets
0 Multicast Packets 77105 Broadcast Packets
104532694 Bytes
Tx
246967099 Output Packets 239170089 Unicast Packets
151048 Multicast Packets 7645962 Broadcast Packets 87406048 Flood Packets
41291846309 Bytes
0 Input Packet Drops 7251473 Output Packet Drops

Nexus1000V#

How can I identify these dropped packets? And do you have any suggestions for running WLBS in unicast mode in the NX 1000V environment?

best regards /ti

Lucien Avramov Thu, 10/14/2010 - 10:39

Regarding jumbo MTU, it is not enabled by default on the 5000.

I explain here, step by step, how to configure and check your jumbo MTU configuration:

https://supportforums.cisco.com/videos/1215

Your question is interesting: if you have not configured jumbo and you see the counters increment, that is due to the cut-through hardware architecture of the Nexus 5000 switching. The frames are fragmented on egress and, since these are hardware switches, very often the frame is a little over the 1500 mark, so the counter does increment with a jumbo packet: for example 1530, 1540. This is not enough to conclude that your switch is actually processing jumbo; you need to look at show queuing interface as shown in the video.
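
For reference, the global jumbo-MTU configuration walked through in the video is along these lines on 4.2-style code (the policy-map name jumbo is just an example):

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216

system qos
  service-policy type network-qos jumbo

You can then verify it per interface with show queuing interface e1/1.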

For the Nexus 1000V, I would suggest that you SPAN the traffic and look at a packet capture to find out if there are retransmissions, etc.
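
If it helps, a local SPAN session in NX-OS-style CLI looks roughly like the sketch below; the session number, source vEthernet and destination interface are placeholders, and the exact options supported on the 1000V should be checked in its configuration guide:

monitor session 1
  source interface vethernet 23 both
  destination interface vethernet 50
  no shut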

dwilliams1979 Thu, 10/14/2010 - 16:43

When our sales engineers were here pitching the UCS and Nexus, we were told that by enabling vPC on the Nexus we lose 50% of the available VLANs. When we are only talking about 500 VLANs, that is pretty significant. What is confusing us is that we cannot find anything in any Cisco documents to indicate that this is valid. Perhaps we misunderstood, perhaps the sales engineer misspoke. I was hoping you could shed some light on this.

Thanks,

Dave

Lucien Avramov Fri, 10/15/2010 - 10:49

I'm not aware of such a limitation.

For example, on the Nexus 5000, 2248, and 2232 you can have up to 507 VLANs. The number should not be cut in half by vPC; it should be unrelated.

mojagot1972 Fri, 10/15/2010 - 14:47

Hi

Just installed an N5K and N2232 and can't get the FEX working. I followed the best practice configurations and the FEX is stuck in an Image Download state. Please see the output below. I think it is software version related... Any ideas?

Many thanks,

Mohammed

SFD1_NCP1_BMENT# sh fex
  FEX            FEX             FEX                  FEX              
Number       Description         State      Model            Serial    
------------------------------------------------------------------------
100    N2232-V-100  Image Download    N2K-C2232PP-10GE   JAF1427DERR
SFD1_NCP1_BMENT# sh fex detail
FEX: 100 Description: N2232-V-100   state: Image Download
  FEX version: 4.2(1)N1(1) [Switch version: 4.1(3)N2(1)]
  FEX Interim version: 4.2(1)N1(0.002)
  Switch Interim version: 4.1(3)N2(1)
  Module Sw Gen: 12594  [Switch Sw Gen: 21]
pinning-mode: static    Max-links: 1
  Fabric port for control traffic: Eth1/2
  Fabric interface state:
    Po100 - Interface Up. State: Active
    Eth1/1 - Interface Up. State: Active
    Eth1/2 - Interface Up. State: Active
  Fex Port        State  Fabric Port  Primary Fabric
Logs:
[10/15/2010 23:21:59.936665] Module register received
[10/15/2010 23:21:59.958267] Registration response sent
[10/15/2010 23:21:59.965579] Requesting satellite to download image
[10/15/2010 23:22:20.299614] Module register received
[10/15/2010 23:22:20.321189] Registration response sent
[10/15/2010 23:22:20.328549] Requesting satellite to download image
[10/15/2010 23:22:58.669270] Module disconnected
[10/15/2010 23:22:58.683093] Module Offline
[10/15/2010 23:23:07.729538] Module register received
[10/15/2010 23:23:07.751760] Registration response sent
[10/15/2010 23:23:07.759959] Requesting satellite to download image
[10/15/2010 23:23:11.823229] Module register received
[10/15/2010 23:23:11.843855] Registration response sent
[10/15/2010 23:23:11.852193] Requesting satellite to download image
[10/15/2010 23:23:17.837929] Module register received
[10/15/2010 23:23:17.859348] Registration response sent
[10/15/2010 23:23:17.866676] Requesting satellite to download image
[10/15/2010 23:23:25.955376] Module register received
[10/15/2010 23:23:25.976779] Registration response sent
[10/15/2010 23:23:25.984456] Requesting satellite to download image
[10/15/2010 23:23:36.192917] Module register received
[10/15/2010 23:23:36.215886] Registration response sent
[10/15/2010 23:23:36.223185] Requesting satellite to download image
[10/15/2010 23:23:48.311157] Module register received
[10/15/2010 23:23:48.331886] Registration response sent
[10/15/2010 23:23:48.341066] Requesting satellite to download image
[10/15/2010 23:24:02.471718] Module register received
[10/15/2010 23:24:02.493376] Registration response sent
[10/15/2010 23:24:02.500763] Requesting satellite to download image
[10/15/2010 23:24:18.750502] Module register received
[10/15/2010 23:24:18.771231] Registration response sent
[10/15/2010 23:24:18.781395] Requesting satellite to download image
[10/15/2010 23:24:37.60267] Module register received
[10/15/2010 23:24:37.80980] Registration response sent
[10/15/2010 23:24:37.98807] Requesting satellite to download image
[10/15/2010 23:24:57.431279] Module register received
[10/15/2010 23:24:57.452779] Registration response sent
[10/15/2010 23:24:57.461660] Requesting satellite to download image
[10/15/2010 23:25:36.169274] Module disconnected
[10/15/2010 23:25:36.184155] Module Offline
[10/15/2010 23:25:42.203487] Module register received
[10/15/2010 23:25:42.226351] Registration response sent
[10/15/2010 23:25:42.233920] Requesting satellite to download image
[10/15/2010 23:25:46.382691] Module register received
[10/15/2010 23:25:46.406076] Registration response sent
[10/15/2010 23:25:46.413464] Requesting satellite to download image
[10/15/2010 23:25:52.439605] Module register received
[10/15/2010 23:25:52.460047] Registration response sent
[10/15/2010 23:25:52.467426] Requesting satellite to download image
[10/15/2010 23:26:00.646692] Module register received
[10/15/2010 23:26:00.678842] Registration response sent
[10/15/2010 23:26:00.686522] Requesting satellite to download image
[10/15/2010 23:26:10.691728] Module register received
[10/15/2010 23:26:10.712499] Registration response sent
[10/15/2010 23:26:10.721325] Requesting satellite to download image
[10/15/2010 23:26:23.134226] Module register received
[10/15/2010 23:26:23.157134] Registration response sent
[10/15/2010 23:26:23.164738] Requesting satellite to download image
[10/15/2010 23:26:37.327115] Module register received
[10/15/2010 23:26:37.348568] Registration response sent
[10/15/2010 23:26:37.356147] Requesting satellite to download image
[10/15/2010 23:26:53.473585] Module register received
[10/15/2010 23:26:53.495269] Registration response sent
[10/15/2010 23:26:53.502851] Requesting satellite to download image

Lucien Avramov Fri, 10/15/2010 - 15:56

The problem you are facing is that the 2200 series needs 4.2 or later code on the N5K.

Upgrade your 5K from 4.1.3 to 4.2 and you will be all set.
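
For reference, the upgrade itself is done with the install all command; a sketch, assuming the 4.2(1)N1(1) kickstart and system images have already been copied to bootflash (verify the exact file names against the release you download):

N5K# install all kickstart bootflash:n5000-uk9-kickstart.4.2.1.N1.1.bin system bootflash:n5000-uk9.4.2.1.N1.1.bin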

dwilliams1979 Tue, 10/19/2010 - 20:01

So it looks like there is some new code out for the 5K (n5000-uk9.5.0.2.N1.1.bin) but I cannot find any release notes for it anywhere. Can you offer any insight into this code, or direction towards the release notes for it?

tenaro.gusatu.novici Tue, 10/19/2010 - 22:35

Hi Lucien,

if it is not out of scope, I would like to get some feedback regarding the best testing strategy when using storage on one side and UCS on the other (with a Nexus 5K in the middle). If my main goal is the highest read (or write) throughput, should I use pure FC, FCoE, or maybe Gigabit Ethernet? How can I measure this?

Regards,

Tenaro

Lucien Avramov Tue, 10/19/2010 - 23:55

I'm not quite sure what you want to measure. I would suggest using an IXIA-type traffic generator device if you would like to do performance testing.

Also, you can simply generate a large dump file and test the Ethernet speed.

Keep in mind that you can influence FCoE traffic / Ethernet traffic with the modular QoS configuration on the N5K (bandwidth allocation).

mojagot1972 Wed, 10/20/2010 - 22:40

Hi

I have been asked by a customer to enable multicast NLB for his VMware servers. Our network design is a Cisco 6500 VSS at the aggregation layer, followed by Nexus 5010s with Nexus 2232 FEXes. The VMware environment hosts the Nexus 1000V.

In the previous datacentre all servers connected to standalone pairs of Cisco 6500 switches. All we did was add the static ARP entry and the static mac-address entry on the switch.

The aggregation layer in the new design is the only Layer 3 routing point; everything else below it is Layer 2 switching. With this in mind, I was going to put a static ARP entry on the aggregation layer.

Would this work?

Many thanks,

Mohammed

Lucien Avramov Fri, 10/22/2010 - 10:33

I don't see an issue with that from this brief description.
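
For reference, the workaround described earlier on the Catalyst side is typically a static ARP plus a static multicast MAC entry; a rough IOS-style sketch with placeholder values (the IP, MAC, VLAN and interfaces are examples only, and newer IOS spells the second command "mac address-table static"):

arp 10.10.10.10 03bf.0a0a.0a0a ARPA
mac-address-table static 03bf.0a0a.0a0a vlan 10 interface GigabitEthernet1/1 GigabitEthernet2/1

The static MAC entry constrains the flooding of the NLB multicast MAC to the listed ports on the switch where it is configured.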

areanetsenato Thu, 10/21/2010 - 04:51

Hi Lucien

Thanks for your precious info...

Another question for you...

We have implemented a VMware environment around two IBM xSeries servers connected to a set of two N2K-C2248TPs, which are extensions of a redundant pair of Nexus switches - a set of two N5K-C5010Ps.

We're planning to upgrade from version 4.2(1)N1(1) to Version 5.0(2)N1(1).

I want to upgrade the software on a single N5K-C5010P while the second N5K-C5010P keeps working with the pair of N2K-C2248TPs.

Then I want to upgrade the second N5K-C5010P.

What is the correct procedure for this?

If my servers are connected to both N2K-C2248TPs, can I assume that the connection between the servers and the N2K-C2248TPs will keep working?

Thanks
