FCoE and FC

Dec 18th, 2008

Hi all


Perhaps the community could settle an argument that is stopping everyone here from doing any real work.


Like a lot of shops, we're tossing around FCoE and trying to weigh up the pros and cons. This has moved up the agenda as we're about to fit out a new data centre and want something that's good for 5-7 years.


The stumbling block we've hit is whether we should follow what seems to be the trend and go with a pair of Nexus 5000s at the top of each rack for the 'first hop', or wait for FCoE blades to come out for the 9513.


The 5000 plan gets everything in and cabled up from day one, and we're in good shape if FCoE takes off in the next 12 months. The wait-for-MDS plan is attractive to the bean counters but could leave us unable to react to server connectivity demands in the short term.


What's the opinion round the virtual water cooler?

inch Thu, 12/18/2008 - 12:46

G'day,


If it were me planning a new data centre, I would focus on 10GbE at the top of the rack.


With this in mind, you have two choices: Nexus 5K or a Cat 4900.


If you want FCoE, it's Nexus 5K in the rack with Nexus 7K as distribution/aggregation/core :)
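For what it's worth, the top-of-rack piece is only a handful of lines of NX-OS on the 5K. A rough sketch below, assuming a CNA-attached server on Ethernet 1/1, VSAN 10, and FCoE VLAN 100 (all of those numbers are made up for illustration):

```
! Hedged sketch: FCoE access on a Nexus 5000 (NX-OS).
! Interface, VLAN and VSAN numbers are assumptions, not from the thread.
feature fcoe

! Map an FCoE VLAN onto the VSAN
vlan 100
  fcoe vsan 10
vsan database
  vsan 10

! Virtual FC interface bound to the server-facing Ethernet port
interface vfc1
  bind interface ethernet 1/1
  no shutdown
vsan database
  vsan 10 interface vfc1

! The Ethernet port must trunk the FCoE VLAN
interface ethernet 1/1
  switchport mode trunk
  switchport trunk allowed vlan 1,100
  no shutdown
```

The vfc then logs into the fabric like any other F-port, so zoning on the storage side carries on as normal.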





storagemitch Thu, 12/18/2008 - 13:01

Yeah, I do like the sound of Nexus all the way, but where do we plug in the DMX4s? And I don't think Nexus flies as a distribution layer without cheaper 10GbE NICs and wholesale support/ratification of FCoE.


By connecting the N5K to an MDS with the FC module we've not really consolidated anything; if anything, it's gone from two networks (Cat and MDS) to three (Cat, MDS and Nexus). Following the KISS argument, I'd like to see an FC blade for the Nexus (not going to happen, according to sources) or an FCoE blade for the MDS (jury's still out, I believe).


It's like the old days of always hanging on a month for a new PC that's faster and cheaper...

stephen2615 Fri, 12/19/2008 - 01:27

DMX4s? You have so much money it doesn't matter... :)


My major stumbling block is hundreds of enterprise storage ports, all at 4 Gbps. 95% of our SAN-connected servers are HP c-Class blades, and they are fairly hefty systems and relatively new.


I have seen no mention at all of CNAs for any of our current infrastructure. I also have core/edge and NPV/NPIV solutions, and so far nothing Nexus-wise is worthwhile. But give it a couple of years and that might change.
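For anyone following along, the NPV/NPIV piece is small on the CLI side. A rough sketch, assuming fc1/1 is the uplink to the core (port numbers are illustrative; note that enabling NPV mode erases the switch config and reloads, so don't try this casually):

```
! Hedged sketch: NPV on an edge switch (NX-OS), with NPIV on the core.
! Interface numbering is an assumption for illustration only.

! --- Edge switch: run in N_Port Virtualization mode ---
feature npv
interface fc1/1
  switchport mode NP
  no shutdown

! --- Core MDS: must accept multiple logins per port ---
feature npiv
```

In NPV mode the edge box stops being a full fabric switch (no domain ID consumed) and just proxies its hosts' logins up the NP uplink, which is partly why it fits the core/edge designs Stephen describes.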


I am sure my infrastructure is similar to most organisations'. Blades running virtual servers seem to be rampant in the data centre. That's really what has bemused me about Brocade: they created their own 8 Gbps HBAs. Has anyone bought one? Does anyone use chassis servers anymore?


I just spent huge dollars replacing our ageing FC switches because the accountants said so. I also have huge dollars invested in native Fibre Channel DR solutions across data centres. What can FCoE do for me right now?


The really big question is: Do I want to hand over my nice neat well designed SAN to our network admin?


Stephen

inch Fri, 12/19/2008 - 15:23

Howdy,


I guess that's the problem with trying to crystal-ball what's going to happen with FCoE, and in what time frame.


Chat to your EMC folks; it won't be long until there's a 10GbE host adapter for the DMX.


10GbE NICs are getting cheaper by the day, and it won't be long before they're on-board on x86 servers (they already are on some Sun servers).


Unfortunately it's probably just bad timing to be revamping your data centre. I'm sure if you could wait six months there would be some more concrete answers for you :)

