2319 Views, 5 Helpful, 17 Replies

Mirror SAN over FCoE

mmacdonald70
Level 1

I know that I'm getting a little ahead of the technology, but I'm working out an idea of using VMotion or VMware HA to provide server redundancy between two datacenters. Unfortunately, one of the datacenters is offsite and connected over several L3 links.

Since my solution would require each VMware server to access the same SAN space, I was thinking of using L2VPN to extend a VLAN to the offsite data center and use FCoE to mirror the SAN space.

Since I am not much of a SAN guy, I was hoping that some of the more knowledgeable people here would let me know if this is possible.

2 Accepted Solutions



Here is a GREAT presentation on FCoE by Dante Malagrino. I saw him give this presentation at EMC World 2008 and it answered a lot of my questions.

http://www.cisco.com/web/learning/le21/le34/emc/2008/post/docs/Driving_Toward_Unified_Fabric.pdf

I've gotten permission to post the other presentation on FCoE from EMC and will add it as an attachment to this posting. Feedback on these is desired/welcome and will be passed along.

I hope this helps to answer questions about FCoE.

Gary

(Attachment: FCoE)


17 Replies

inch
Level 3

G'day,

It all depends on what disk arrays you intend to use and what replication package.

Most disk arrays are set up in a master/slave arrangement. If you want to use the second (DR) copy of the array for writes, something usually has to trigger the slave array to be promoted to primary, or the relationship needs to be broken.

This is so you don't have a split brain (a term used in clustering).

What you need is active/active array/LUN replication. I'm not sure who, if anyone, can do this at the moment.

If you can tie in VMotion with some disk array replication software, you are on the money!

As far as FCoE goes, you could use it, but you might be better off with FCIP (Fibre Channel over IP).

Cheers

Andrew

stephen2615
Level 3

I think I read somewhere, or perhaps our very knowledgeable Cisco techo said, that FCoE is for closely located systems, primarily within the data centre. So, if that's the case, you won't be using FCoE for any DR/replication solutions unless something changes in the future.

Also, if FCoE is not good for distances beyond the data centre, then we will still need native FC over CWDM/DWDM and/or FCIP.

Am I right with the distance limitation?

Stephen

Stephen, you're correct.

10Gb Ethernet cabling distances:

CAT6 - 55 m

CAT6a - 100 m

Twinax - 10 m (very low power consumption - I was told about 0.1 W per port)

FCoE is only meant for the data center, and since FCoE runs over Ethernet without TCP/IP, it can't be routed. It's meant to take a server with 2 NICs and 2 HBAs and combine them using a CNA (Converged Network Adapter) to allow for fewer cables, lower power consumption, and the ability to carve up the 10Gb pipe as you see fit (i.e. 2Gb for network traffic and 8Gb for FC per card). With dual CNAs, and with the coming of 40Gb and 100Gb Ethernet, you can quickly see why this is gaining a lot of attention. I have some presentation documents on this but need to see if they are under an NDA. If not, I'll post them here.
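As a rough illustration of the carve-up idea (the 2Gb/8Gb figures are just the example above, and the "spare capacity can be borrowed" behaviour is an assumption here, not a statement of how any particular switch does it), a small Python sketch:

# Sketch only: how a weighted 2 Gb LAN / 8 Gb FC split of a 10 Gb converged
# link behaves if spare capacity can be borrowed by whichever class needs it.
LINK_GBPS = 10.0
weights = {"LAN": 0.2, "FC": 0.8}

def carve(offered_gbps):
    """Guarantee each class its share, then hand spare capacity to classes
    that still have demand (work-conserving sharing - an assumption here)."""
    alloc = {c: min(offered_gbps[c], w * LINK_GBPS) for c, w in weights.items()}
    spare = LINK_GBPS - sum(alloc.values())
    for c in alloc:
        take = min(offered_gbps[c] - alloc[c], spare)
        alloc[c] += take
        spare -= take
    return alloc

print(carve({"LAN": 0.5, "FC": 9.5}))  # FC borrows the idle LAN share -> {'LAN': 0.5, 'FC': 9.5}
print(carve({"LAN": 6.0, "FC": 9.0}))  # both saturated -> falls back to {'LAN': 2.0, 'FC': 8.0}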

Also, FCoE is still going through standards approval and expected to be finalized by 4Q08.

Gary,

I am relatively excited about FCoE because of the continuing investment we have in servers and storage. If it can deliver, it will save us heaps of money someday. However, one thing that slightly puzzles me is how load balancing will be done with Ethernet. All my SAN equipment is in two fabrics (real or virtual depending on the instance) but I am not sure how FCoE works as far as providing two paths to everything. I heard a rumour that this is a sticking point which needs to be sorted out.

I won't be racing into FCoE, but I am sure we will have it in a year or two. It gives me food for thought with new infrastructure, and I am waiting to see what HP and IBM do to provide a CNA for blades, which we have by the thousands...

Stephen

Hi again :-)

Multipathing will depend on the multipathing drivers that you have right now - FCoE is still going to be block-based storage; it will have a LUN, a target, etc.

The same questions were being raised when iSCSI was being drafted - I proved it to some naysayers by using the Cisco IPS-8 and presenting a LUN via iSCSI and via FC :)

Sure, vendors yelled and screamed "it's not supported" with their hands in the air like in Team America (not Cisco, though ;)

Once the vendors realise it's going to be "get on board the bus or you're out", there will be PowerPath for FCoE and Veritas DMP for FCoE, etc. :)

What about using fiber and the Nexus 5020? Would this help the issue? I was under the impression that FCoE could be switched anywhere an L2 packet could go. I was thinking more along the lines of either a SPA adapter or a 10Gb NIC that supports FCoE, plugged into a Nexus 5020, which is uplinked to a 6509 VSS. I was hoping that this could be switched to an identical environment over MPLS or VPLS.

So long as the network is "lossless" you won't have a problem - if you are dead serious about it, I would suggest you chat to your Cisco SE about it. They will be able to tell you which Cisco products outside of the Nexus range can be used in this solution.

You might want to have a read of the draft standard to give yourself a bit of background first :-)

If you can't get onto your Cisco SE, give me a yell - I might be able to help out.

Cheers

G'day Stephen,

It's true FCoE is meant to assist with data centre consolidation, which means taking two switches (Ethernet and FC) and putting the two protocols over one transport (with some extra features).

If you can have an L2VPN between sites and still ensure that the network is "lossless" - i.e. ensure that Ethernet flow control is configured across the entire path - it would probably work.

The whole distance limitation is basically a latency problem - applications don't like having to wait for each write to be ack'd - and it will exist in any technology, really.
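To put a rough number on the "lossless over distance" part (figures assumed, not from this thread): with pause-style flow control, the receiving end has to be able to buffer everything still in flight when it tells the sender to stop, and that amount grows with bandwidth times round-trip time. A back-of-the-envelope sketch in Python:

# Back-of-the-envelope sketch (assumed figures): buffer needed to keep a
# pause-based lossless Ethernet link truly lossless over distance.  When the
# receiver signals "stop", data already in flight - roughly bandwidth x RTT -
# still arrives and has to be buffered somewhere.
KM_PER_MS = 210.0  # speed of light through glass, roughly 210 km per ms

def in_flight_bytes(link_gbps, distance_km):
    rtt_s = 2 * distance_km / KM_PER_MS / 1000.0
    return link_gbps * 1e9 * rtt_s / 8

for km in (0.1, 10, 210):
    print(f"{km:>6} km: ~{in_flight_bytes(10, km) / 1e6:.3f} MB in flight on a 10G link")

# ~0.001 MB inside a data centre, ~0.12 MB at 10 km, ~2.5 MB at 210 km -
# hence the advice to make sure flow control is configured end to end.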

The actual FCoE drafts are quite a good read; it would be worth reading up on them :)

Cheers

Andrew

Thanks for all the information. I'll have to have a look at the drafts. I got some good ideas and I guess that it is something that I should look into. Latency on this link won't be a problem.

G'day,

Unless you are talking about sub-millisecond latency, it could be quite a problem - what isn't a problem in networking circles is a huge problem in the storage world. :-)

If latency is an issue, wouldn't you be going with InfiniBand instead? I don't know very much about InfiniBand other than it has very low latency.

Gary

G'day,

I believe we were talking about a wide-area L2VPN?

InfiniBand is low latency, but so is a well-designed Ethernet and Fibre Channel network when it's in the same data centre.

As soon as anything long-distance comes in, you hit speed-of-light issues :( ... If only light were faster... ?? :)

Here is a quick calculation for you!

Speed of light through glass (this is rough!):

210,000 km/s

so 210 km/ms

So if your data centre is 210 km away, you are looking at at least a 2 ms round trip for a ping - what about a SCSI write?

Well here goes!

Initiator says to target: I want to write (1 ms)

Target says: I'm ready (1 ms)

Initiator says: Here comes your data (xxx ms, depending on how much data is in the sequence)

Target says: The write was all good (1 ms)

Depending on the type of disk replication being used, this can cause all sorts of hiccups!
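The same arithmetic as a small Python sketch (assumptions: the 210 km/ms figure above, the four one-way legs just listed, and ignoring data-transfer and array service time):

# Propagation delay only: ~210 km per ms through glass, and a SCSI write
# costing four one-way legs (command, transfer-ready, data, status),
# i.e. two full round trips.  Serialisation and array time are ignored.
KM_PER_MS = 210.0

def one_way_ms(distance_km):
    return distance_km / KM_PER_MS

def write_io_ms(distance_km, legs=4):
    return legs * one_way_ms(distance_km)

for km in (1, 32, 210):
    print(f"{km:>4} km: ping RTT ~{2 * one_way_ms(km):.2f} ms, "
          f"write adds ~{write_io_ms(km):.2f} ms")

# 210 km -> ~2 ms ping and ~4 ms per write; 32 km -> roughly 0.3 ms ping
# and about 0.6 ms added per write.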

I use Cisco ONSs and native FC with HDS TrueCopy Sync for some replication. At 32 km, I estimate that about 3 ms is added to our writes, which should be cache to cache.

That's very good, but sometimes the host operating system has issues and the replication goes pear-shaped quickly. E.g., queue-depth issues caused by a badly written application will sometimes push way too much down the pipe in too short a time, and it appears as though we have issues with the "SAN" in general. I say "SAN" as it is almost impossible to determine which part is causing the delay without something like Finisar NetWisdom.

Cisco does not seem to talk about InfiniBand now the way they did a few weeks ago. Perhaps it might be dropped, as not many people use it. I would be interested in seeing it work over distance, as it has some excellent clustering features.

I suppose time will tell.. eh?

Stephen

At 32 km, the propagation delay is more like 150 usec each way (roughly 300 usec per round trip), not 3 ms as you estimated.

A write I/O is made up of a write command, transfer ready, data and status, so you incur roughly two round trips - at least an additional 600 usec or so for 32 km, a bit more depending on how many data frames there are (the size of the write). I would not expect much more than that, i.e. well under a millisecond.

If you suspect the SAN or DWDM is slow or congested, try an fcping to the remote storage port from the local MDS and another fcping from the local MDS to the remote MDS (ff.fc.xx), then compare the two. Fcping resolves down to microseconds.

For FCoE, the transport for replication will not change from what we are using for native FC today: it will be optical dark fiber, CWDM, DWDM or FCIP.
