I need to develop a SAN design with many devices (total number of FC ports >= 55, and at least 10 additional ports are planned in the future). I could use a pair of Brocade SW4100s with 32 ports each, but I'm a CCNA, so I strongly prefer Cisco devices. An MDS9216A with MDS9500 24-port blades is too expensive, as are the 9500 directors, so they are not an option. The MDS9140 is too old, and I do not want to implement 2 Gbps switches in a new installation.
So the single choice is the new Cisco MDS 9124 switch. It is cheap enough, and I can plan to install four of them, up to 96 ports in total. But it is necessary to develop an appropriate SAN topology.
There are two variants. The first is to make two separate pairs of switches, with 4 or 6 ports on each pair united into an ISL group. (BTW, how many ports can be included in an ISL group? Any quantity up to 16, or multiples of 4 only?) The second variant is to cross-connect all the switches. This one is more robust, but more complex and less predictable. Which one is preferable?
The heart of the SAN is a Hitachi enterprise disk array with 16 ports. A Sun L500 library with many FC drives is also planned for installation this year. All the servers will need to directly access all the library drives and disk array ports through the SAN.
I think it is interesting that you don't want to use the Brocade switches if you already have two of them. I personally am not a fan of Brocade, but it won't stop me using them. I think you need to work out why you would want to use the MDS first and foremost (other than that they are great pieces of kit) and then make a decision.
If IVR is going to be a key part of your SAN design, then you are up for an Enterprise licence and dollars.
If you need to VSAN your fabric, then the MDS is the only option. If you are just going to use zones, then use the Brocade switches and get a couple more smaller switches such as the 9124 and use interop modes.
As you mentioned, you are going to be using backup devices in your SAN and that is bound to cause some issues.
I also imagine that when you say ISL group, you are talking about port channels. I believe that you can use up to 16 ports per port channel. If you are using 4 or 6 links at 4 Gbps, that's normally a big enough pipe for most things.
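For reference, building a port channel on the MDS is only a few lines of SAN-OS. The interface and channel numbers below are made-up placeholders, not anything from your design:

```
! Sketch only -- interface and channel numbers are illustrative.
interface port-channel 1
  switchport mode E

interface fc1/1 - 4
  channel-group 1 force
  no shutdown
```

As far as I know the members do not have to come in multiples of 4, so a 6-port group should be fine too.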
I think that Cisco has come up with a winner with the 9124 and I hope to find a reason to get some.
I would work on two separate fabrics, each offering redundant paths, and use ISLs between the two switches. That's 48 ports per fabric, and if you use 8 ISL ports (four per switch) at 4 Gbps, that's 40 usable ports per fabric. Considering the cost, that's pretty good.
The only thing that bothers me is the backup system, which may only have one path to play with. I am going through a design phase of trying to implement one path into redundant fabrics. I see FSPF as being a key to that.
Most hosts these days really have trouble pushing more than 1 Gbps per HBA, and that's the reason Cisco uses oversubscription.
Good planning is the key to making a good decision in this case.
In order to get a better picture of the environment I have a few questions for you :-)
Are all devices within the same data centre?
What type of tape drives and how many will be in the L500?
What are the specifications of the HDS array?
What types of hosts will be connected (sun/hp/win/linux)?
What will the general workloads be like?
Will all backups be LAN free?
Is iSCSI an option?
Have you had much experience with LAN-free backups? Are there any design guidelines for this? You know my email if you want to take it offline.
I found something I did not like about iSCSI and Windows. The iSCSI initiator on a Windows system won't make dynamic disks available automatically after a system reboot. The iSCSI disks have to be basic. Perhaps there is some Windows script that you can run during bootup that will bring the dynamic disks online, but I don't know if that is possible. Perhaps Veritas can get around that?
We can take the lan free backups offline :-)
With iSCSI, however, what client software are you using? The native Windows one or the Cisco one?
Microsoft iSCSI Software Initiator version 2.03.
Not supported for use with the Microsoft iSCSI Software Initiator:
(These are not supported by the Microsoft software iSCSI initiator; they may be supported by a hardware-based iSCSI initiator (HBA))
Dynamic disks (applies to Windows 2000 and Windows Server 2003)
Configuring volumes on iSCSI disks as Dynamic disk volumes using the Microsoft software iSCSI initiator is not currently supported. It has been observed that timing issues may prevent dynamic disk volumes on iSCSI disks from being reactivated at system startup.
Hardware-based iSCSI initiators (iSCSI Host Bus Adapters, or "HBAs") can typically make the devices they connect to available much earlier in the system startup process than the iSCSI software initiator can. Therefore, iSCSI HBAs may provide support for dynamic disk volumes.
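For basic disks, at least, the Microsoft initiator ships a persistent-volume binding mechanism so dependent services wait for the iSCSI volumes at boot. A sketch, if I remember the iscsicli.exe tool correctly (this does nothing for dynamic disks, which stay unsupported by the software initiator regardless):

```
rem Sketch only: binds all currently mounted iSCSI volumes so the
rem initiator service holds dependent services until they reappear
rem at startup. Basic disks only.
iscsicli BindPersistentVolumes
```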
I think Cisco has stopped offering iSCSI initiators to the public.
I have to see if Veritas fixes this problem as it is a pain not to use dynamic disks.
The Cisco one is still available and I am currently using it on a few win 2003 servers.
I just checked with the windows admin and they are using dynamic disks....
There hasn't been a release of the Cisco iSCSI driver for Windows for 18 months... It certainly works though :)
You do this just to annoy me.. :( I really wanted to complain about something when it comes to Windows. I miss my Solaris.
I will give the Cisco one a go.
Methinks inch is a legend. I read the readme file, made the changes for actiscsi.vbs, and it worked.
I wonder if that will work with the Microsoft iSCSI driver...
First of all, I have no Brocade switches right now. I'm an engineer at a VAR, so I'm acting as a consultant for the client firm. I need to create a new SAN from scratch, and which devices get bought depends mostly on my recommendations (and on the price of the solution, of course).
The client has no proper understanding of the situation, and right now price is the key factor. I'd prefer to install something like an MDS9506 or 9509 and eliminate this headache forever, but the client thinks that is too expensive. Unfortunately we cannot convince him that he needs a properly scalable solution, not a cheap one. So I must contrive a scheme based on cheap elements interconnected in a complex topology. Until now I was inclined to use a 2+2 MDS9124 combination where the first pair is not connected to the second one at all.
The main concern, however, is the tape library. That system has not been bought yet, so I do not know how many drives it will contain. But I definitely know that every FC drive has only one FC port, and every server must have access to every FC drive to implement a LAN-free backup scheme. LAN backup is not an option because the backup and restore windows are small, so the LAN cannot accommodate the speed needed. It seems that I'll need to connect the FC switches in a full mesh topology and use VSANs to route traffic properly without potential congestion on some ports/links. That will complicate the scheme seriously, thus lowering its reliability, but I have no other choice.
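For what it's worth, carving the backup traffic into its own VSAN on the 9124s is itself only a few lines of SAN-OS. The VSAN IDs and interfaces below are placeholders I made up, not anything from the actual design:

```
! Sketch only -- VSAN IDs and interface numbers are placeholders.
vsan database
  vsan 10 name BACKUP
  vsan 20 name DISK
  vsan 10 interface fc1/13
  vsan 20 interface fc1/14
```

The complexity is less in the per-switch config than in keeping the mesh and the VSAN-to-link mapping predictable as ports get added.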
All devices are in the same data center (a single building, with link lengths of no more than 100-150 meters). FCIP/iSCSI are not needed, at least for now. I suspect that long-range connectivity problems could arise in the future, but I cannot do anything about that. Well... it seems Cisco is not going to EOL its MDS9216 right now.
The HDS array has 16 ports. I'll connect them to a pair of 9124s or spread them across all of them; I have not decided yet.
P.S. Cisco definitely made a great device in the MDS9124. Until now I used Brocade SW 3250/3850/200 in entry-level SAN installations, mainly because of the low price. Now I can use Cisco devices in the same price category. Today it is hard to buy a 9124 in Russia because Hitachi still has not entered its specs into the configurator, but they promise to do it in February.
I'm waiting for MDS 9148...
I am sure the 9216 is or was EOL'ed and you can only get the 9216A now. I just got a few of them with the additional 48-port cards. Very nice piece of kit.
If all the storage is going to be on the HDS, I would get some cheap big SATA drives, virtualise them into the HDS, and use them for backups through ShadowImage. Then get a backup server to break the pairs, mount the volumes, and have as much time as you like to back up, assuming the copy window is good enough.
You can get some very cheap SATA storage these days and as long as you can virtualise them, it has to be an option.
I share your pain with the backup library. I don't see the sense in complicating everything when there is a simple solution.
There are additional drive arrays (old Sun T3s and a 3510, as well as a new midrange SE6140 array). Yes, of course additional drives in the SE9990/USP100 would be the simplest solution, but they are too expensive. So they decided that they want cheap intermediate storage. It is possible, and they have Open Volume Manager for external array virtualization, but they would also need to buy ShadowImage licenses, and 6 TB of licenses costs ~$100k. I doubt they'd spend that.
One of the additional problems is the very large amount of data stored to tape. This is a bank, and they want something like zero probability of data loss (i.e. they want to keep almost all the data even if the whole system crashes). So they plan to make many copies of Oracle databases using complicated techniques. Calculations show that even at 4 Gbps speeds the backup system hits the time limits.
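That is easy to sanity-check with back-of-envelope numbers. The figures below (20 TB of Oracle copies, 70% usable link efficiency) are my own assumptions for illustration, not the bank's actual data:

```python
# Back-of-envelope backup window check. All figures are assumed
# examples, not the actual numbers from this project.
def backup_hours(data_tb, link_gbps, efficiency=0.7):
    """Hours to stream data_tb terabytes over one link_gbps FC path.

    efficiency discounts protocol overhead and tape-drive streaming stalls.
    """
    data_bits = data_tb * 1e12 * 8           # TB -> bits
    rate_bps = link_gbps * 1e9 * efficiency  # usable bits per second
    return data_bits / rate_bps / 3600

# e.g. 20 TB of database copies over a single 4 Gbps path:
print(round(backup_hours(20, 4), 1))  # roughly 15.9 hours
```

So a single 4 Gbps path blows through most nightly windows on its own, and a restore of everything takes at least as long; hence the pressure for many drives and many parallel paths.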
Moreover, they want to restore the whole system from tapes in a shortest period of time possible.
Well, that is only a future project for now. The current headache is the SAN, so I need a solution which will not crash under the weight of traffic and has enough ports to connect all the hosts and storage arrays. And at minimal cost! They pay, so they call the tune... :(
There is another option, but it could be somewhat expensive. Inch and I have discussed it before. You could go for a Cisco Network Accelerated Serverless Backup approach by putting in an SSM module. You could get some serious throughput.
I estimated that you could get about 4 TB an hour throughput.
I have been seriously thinking about it, if my boss would let me purchase some more Cisco gear. It would stop the heartache of trying to incorporate our new VTLs into the SAN and also give our current StorageTek backup libraries a fright.
I don't know how much they cost and even if anyone uses them.
I virtualised a whole bunch of 3510s (about 15 TB) into my USP 1100 and they did a good job. I had a number of SANbox 2 16-port switches in between, and they performed very well for what I wanted.
Just remind me not to use that bank....
The SSM cannot be used because it requires a modular switch/director chassis. :( Anyway, the customer uses Veritas products, so they would buy the Serverless Backup option rather than a hardware module.
They are good guys... mostly, but sometimes think too high about themselves. Don't worry, they do not operate outside the borders of Russia. :)
Well, other than doing all sorts of interesting IVR with the smaller switches (and buying enterprise licences), I am out of ideas. Good luck.