I am a newbie in the SAN world, so I hope I'm asking the right questions.
We are looking into replacing our aging SAN. As part of the replacement, management is thinking of placing a secondary SAN at one of our remote locations for backup and DR purposes.
My question revolves around providing connectivity between the two SANs. I have been asked to look into a dedicated point-to-point connection for the SAN replication. Not counting the actual SAN hardware, what other hardware would I need? Am I basically building an independent network for this? Why this route instead of going over our corporate WAN? Any good recommended reading for learning about the SAN world?
For remote replication, there are two main ways to provide SAN connectivity. If you need high bandwidth, DWDM or CWDM links carry Fibre Channel end to end.
You might be more interested in FCIP (Fibre Channel over IP). Depending on the bandwidth needed to stay within the allotted replication window, FCIP offers several enhancements: compression and encryption are options, and write acceleration is available if it's compatible with your replication application.
Your options are either native FC over some sort of fiber transport (dark fiber, CWDM, DWDM, SONET) or FCIP over a WAN link. Native FC is usually used for synchronous replication over distances up to about 200 km with large bandwidth requirements; FCIP is usually used for asynchronous replication at longer distances with less bandwidth. The transport largely dictates the hardware. FCIP is generally cheaper.
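The 200 km rule of thumb for synchronous replication comes down to propagation delay: every write must be acknowledged by the remote array before it completes, so distance adds latency directly to each I/O. A rough sketch of the arithmetic (the ~200,000 km/s figure for light in fiber is an approximation, and real links add switch and tunnel overhead on top):

```python
# Rough propagation-delay estimate for a synchronous replicated write.
# Assumption: light travels ~200,000 km/s in fiber (refractive index ~1.5),
# i.e. about 200 km per millisecond, and the write needs one full round trip.

def sync_write_rtt_ms(distance_km, km_per_ms=200.0):
    """Round-trip propagation delay in ms for one acknowledged write."""
    return 2 * distance_km / km_per_ms

print(sync_write_rtt_ms(200))  # 2.0 ms added to every write at 200 km
print(sync_write_rtt_ms(1000)) # 10.0 ms -- why long haul goes asynchronous
```

That extra couple of milliseconds per write is tolerable for most applications at 200 km, but at longer distances it stacks up, which is why long-haul links usually run asynchronous replication over FCIP instead.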
If you go the FCIP route, a dedicated link is preferable so replication traffic gets predictable latency and bandwidth, but you can share an existing link if you have to; basic QoS is available in that case. Also, FCIP on Cisco devices has special capabilities to help you get the most out of your bandwidth (TCP optimizations, compression, write acceleration, etc.).
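To give a feel for what those Cisco features look like in practice, here is a rough sketch of an FCIP tunnel on an MDS switch. This is illustrative only: the profile number, addresses, and bandwidth values are placeholders, and you should verify the exact commands against the FCIP configuration guide for your MDS/NX-OS release.

```
! Illustrative FCIP tunnel on a Cisco MDS -- all values are placeholders
feature fcip
fcip profile 10
  ip address 192.0.2.1
  tcp max-bandwidth-mbps 45 min-available-bandwidth-mbps 10 round-trip-time-ms 20
interface fcip1
  use-profile 10
  peer-info ipaddr 192.0.2.2
  write-accelerator
  ip-compression auto
  no shutdown
```

The `tcp max-bandwidth-mbps` setting in the profile is where the traffic shaping mentioned above happens, so the Gigabit Ethernet port doesn't overrun a slower WAN circuit.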
A connectivity solution one of the SAN vendors has proposed is using MLPPP to create a dedicated point-to-point link instead of going over our corporate WAN. I understand that this takes individual T1s and uses the MLPPP protocol to bond them together. Do you know what is needed from a hardware standpoint? Do I use standard 1-port / 2-port T1 cards (WIC-1DSU-T1-V2), or do I need something different? Any other pros and cons with this solution compared to going over the corporate WAN?
Most of the replication applications out there, like PPRC and SRDF, require more bandwidth than a couple of T1s; the practical low end for FCIP is usually about 10 Mb/s. What type of corporate WAN connection do you have, and how much of it can you dedicate to this replication effort? Bandwidth usually comes into play when you start looking at replication job windows, e.g., "I need the replication to complete in 2 hours and I have x megabytes to transfer."

The connections from the MDS IP cards are Gigabit Ethernet, so you will need to configure traffic shaping on the MDS to limit the maximum bandwidth it sends on the GE port. If you end up with MLPPP across multiple T1s, I would assume they load-balance on a per-packet basis. If that is the case, any deviation in latency between the links in the bundle would lead to out-of-order TCP packets, which might add delay to the FC traffic.
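The replication-window arithmetic above is worth doing before committing to any circuit. A minimal sketch, with illustrative numbers (the 80% efficiency discount for protocol overhead is an assumption, not a measured figure):

```python
# Back-of-the-envelope check: can a given link finish a replication job
# inside the window? All numbers here are illustrative.

T1_MBPS = 1.544  # DS1 line rate, megabits per second

def required_mbps(data_gb, window_hours, efficiency=0.8):
    """Megabits/s needed to move data_gb within window_hours.
    efficiency discounts protocol overhead (assumed 80%)."""
    megabits = data_gb * 8 * 1000          # decimal GB -> megabits
    return megabits / (window_hours * 3600) / efficiency

def mlppp_capacity_mbps(num_t1s):
    """Aggregate line rate of an MLPPP bundle of T1s."""
    return num_t1s * T1_MBPS

# Example: 100 GB to replicate in a 2-hour window
print(round(required_mbps(100, 2), 1))  # 138.9 Mbps needed
print(mlppp_capacity_mbps(4))           # 6.176 Mbps -- four T1s fall far short
```

Even a modest nightly delta quickly outgrows a T1 bundle, which is why FCIP deployments usually start at DS3-class bandwidth or better.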