<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<HTML>
<HEAD>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=us-ascii">
<META NAME="Generator" CONTENT="MS Exchange Server version 5.5.2654.45">
<TITLE>RE: [olug] SAN Information</TITLE>
</HEAD>
<BODY>
<P><FONT SIZE=2>How much software you need depends on how fancy you want to be. If you use LUN masking etc. in the controller or switch, then each host can see only the LUNs the controller exposes to it via hardware addressing on the loop. Some say that is not a true SAN, but I use the definition that multiple hosts are sharing one physical disk array, so I call it a SAN. Add software to the mix and a whole bucket of possibilities opens up: you can choose Veritas or the software from the SAN vendor. In that case the SAN is more like another host on the loop, but its job is to store and retrieve data to and from the disks. These more advanced SANs let you do all sorts of cool things like have been discussed (volume management, filesystem resizing, cache stripe optimization, disk block size masking and the like). The way I look at it, in that environment the host never really "owns" the data; the data belongs to the SAN, and it is served to the host as the host requests it.</FONT></P>
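<P><FONT SIZE=2>To make the LUN-masking idea concrete, here is a minimal sketch in plain Python of the lookup a masking controller or switch performs. This is a toy illustration, not any vendor's firmware; the WWNs and LUN numbers are hypothetical: each initiator's world-wide name maps to the set of LUNs it is allowed to see, and anything outside that set is simply never reported to that host.</FONT></P>

```python
# Toy model of controller-side LUN masking (all WWNs/LUNs hypothetical).
# The controller keeps a table of which initiator WWN may see which LUNs;
# a LUN not in the table entry is never reported to that host.
MASKING_TABLE = {
    "10:00:00:00:c9:2b:11:a1": {0, 1},   # host A sees LUN 0 and LUN 1
    "10:00:00:00:c9:2b:11:a2": {2},      # host B sees only LUN 2
}

ALL_LUNS = {0, 1, 2, 3}                  # every LUN the array actually has

def visible_luns(initiator_wwn):
    """Return the set of LUNs the controller reports to this initiator."""
    return ALL_LUNS & MASKING_TABLE.get(initiator_wwn, set())
```

<P><FONT SIZE=2>With this table, host A "discovers" only LUNs 0 and 1 when it probes the loop, and an unknown WWN discovers nothing at all, which is exactly why each host behaves as if it owned a private disk array.</FONT></P>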
<P><FONT SIZE=2>In my environment the host really does own the data: it probes the loop for the LUNs, attaches to them, and will get really mad if it does not see them (they are not virtual volumes). What is cool is that fibre channel drives and fibre controllers are all dual-attached by design, so you can build a completely separate data path to the disks for redundancy if you want to. Again, the more you add, the more you pay. In Sun's case they have redundant interface software that watches for hardware failure and can switch to the secondary path if needed; this is basically Veritas software under the covers. The really big arrays spend a lot of effort on speed. Some big Hitachi units have over 10GB of disk cache (RAM) that the CPU manages so writes are allocated in the most efficient manner. There is a world of difference between what I use and those units. We wanted something reasonable in cost, expandable, reliable, and vendor neutral for upgrading in the future. So we built our own system.</FONT></P>
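<P><FONT SIZE=2>The redundant-path behavior described above (software watching for hardware failure and switching to the secondary path) can be sketched roughly as follows. This is a toy model of the failover logic, not Sun's or Veritas's actual implementation, and every name in it is made up: I/O goes down the preferred path until that path throws an error, at which point the driver fails over to the other attachment.</FONT></P>

```python
class DualPathDevice:
    """Toy model of dual-attached storage: I/O uses the primary path
    until it fails, then transparently fails over to the secondary."""

    def __init__(self, primary, secondary):
        self.paths = [primary, secondary]   # ordered by preference
        self.active = 0                     # index of the path in use

    def read_block(self, block):
        # Try each path at most once, starting from the active one.
        for _ in range(len(self.paths)):
            path = self.paths[self.active]
            try:
                return path(block)          # attempt I/O on the active path
            except IOError:
                # Active path failed: fail over to the next attachment.
                self.active = (self.active + 1) % len(self.paths)
        raise IOError("all paths to the array are down")
```

<P><FONT SIZE=2>Simulating a dead primary link (a path callable that raises IOError) shows the read still succeeds over the secondary path, and the device stays on that path for later I/O, which is the point of paying for the second loop.</FONT></P>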
<P><FONT SIZE=2>John</FONT>
</P>
<BR>
<BR>
<P><FONT SIZE=2>-----Original Message-----</FONT>
<BR><FONT SIZE=2>From: roger schmeits [<A HREF="mailto:schmeits@clarksoncollege.edu">mailto:schmeits@clarksoncollege.edu</A>]</FONT>
<BR><FONT SIZE=2>Sent: Friday, September 27, 2002 1:25 PM</FONT>
<BR><FONT SIZE=2>To: olug@olug.org</FONT>
<BR><FONT SIZE=2>Subject: Re: [olug] SAN Information</FONT>
</P>
<BR>
<P><FONT SIZE=2>OK, let's say we build the SAN and buy all the hardware and so forth. Don't</FONT>
<BR><FONT SIZE=2>you need software to interface with the different OSes?</FONT>
</P>
<P><FONT SIZE=2>That's where it gets pricey, right? I understand the hardware part, but I</FONT>
<BR><FONT SIZE=2>thought there had to be something in between the servers.</FONT>
</P>
<P><FONT SIZE=2>Please correct me if I am wrong.</FONT>
</P>
<P><FONT SIZE=2>Congrats on building your own SAN ... impressive!</FONT>
</P>
<BR>
<P><FONT SIZE=2>On Fri, 2002-09-27 at 11:09, Rogers, John C NWD02 wrote:</FONT>
<BR><FONT SIZE=2>> Hi all, </FONT>
<BR><FONT SIZE=2>> I know I have not made it to any meetings and only reply when I think I</FONT>
<BR><FONT SIZE=2>> have relevant information so here are some ideas on the SAN.</FONT>
<BR><FONT SIZE=2>> </FONT>
<BR><FONT SIZE=2>> At my office I designed and built a Fibre SAN that we use on our Solaris</FONT>
<BR><FONT SIZE=2>> boxes. It was built from scratch so we know every part in it and who</FONT>
<BR><FONT SIZE=2>> made it. In my research I found many vendors, but none of them would</FONT>
<BR><FONT SIZE=2>> share the basic information with us about who made the controller, what</FONT>
<BR><FONT SIZE=2>> FC card they are using, what disk drives, or whether I can get any drive</FONT>
<BR><FONT SIZE=2>> and use it... They like to sell the "System" so you are married to them</FONT>
<BR><FONT SIZE=2>> from that point on.</FONT>
<BR><FONT SIZE=2>> </FONT>
<BR><FONT SIZE=2>> Well, I did not like the fact that if I purchased a 15-bay disk rack with</FONT>
<BR><FONT SIZE=2>> say 5 drives populated, I had to purchase the drives and canisters from</FONT>
<BR><FONT SIZE=2>> the vendor to expand in the future. So I kept searching because all the</FONT>
<BR><FONT SIZE=2>> vendors were charging way over market value for standard off the shelf</FONT>
<BR><FONT SIZE=2>> disk drives (just for the hunk of sheet metal that holds the disk). I</FONT>
<BR><FONT SIZE=2>> found a very good vendor that produces the actual metal drive enclosure</FONT>
<BR><FONT SIZE=2>> and drive canisters. They will also sell you any controller from a list</FONT>
<BR><FONT SIZE=2>> they represent: (CMD Titan Series, Viper Series; Digi-Data Fibre Sabre,</FONT>
<BR><FONT SIZE=2>> 9500 Series, 9200 Series, 9100 Series, Fascore; Infortrend EON Series,</FONT>
<BR><FONT SIZE=2>> Sentinel Series, IFT-3102 Series, IFT-3101 Series; Chaparral K7413 Fibre</FONT>
<BR><FONT SIZE=2>> to SCSI, K5412 Ultra2 to Ultra2).</FONT>
<BR><FONT SIZE=2>> </FONT>
<BR><FONT SIZE=2>> </FONT>
<BR><FONT SIZE=2>> I have to say that the metal work and board work on these cabinets is</FONT>
<BR><FONT SIZE=2>> first rate: they use three 300-watt power supplies and can do full safety</FONT>
<BR><FONT SIZE=2>> monitoring of temperature and other environmental conditions. We have</FONT>
<BR><FONT SIZE=2>> purchased three boxes made by these guys, two 18-bay and one 9-bay. All</FONT>
<BR><FONT SIZE=2>> have been very well made. Supposedly they built the RAIDs for Yahoo (a</FONT>
<BR><FONT SIZE=2>> 9 bay Jaguar with DigiData controllers). </FONT>
<BR><FONT SIZE=2>> </FONT>
<BR><FONT SIZE=2>> Anyway we have about 1.4TB online on our two big racks and 144GB on the</FONT>
<BR><FONT SIZE=2>> small rack. We have a mix of drives from 36GB to 181GB disks making</FONT>
<BR><FONT SIZE=2>> up the tiers/LUNs and have no problems. We chose the DigiData</FONT>
<BR><FONT SIZE=2>> controller (SCSI to the disks, Fibre to the host) with an Emulex LP8000</FONT>
<BR><FONT SIZE=2>> host adapter. You can make them as complicated as you want: LUN mask,</FONT>
<BR><FONT SIZE=2>> mask by WWN, or use a smart FC hub or switch (Gadzooks or Emulex). I</FONT>
<BR><FONT SIZE=2>> think we have about $20K in our system so far but can replace disks at</FONT>
<BR><FONT SIZE=2>> any time with any size, etc., for a long life expectancy.</FONT>
<BR><FONT SIZE=2>> </FONT>
<BR><FONT SIZE=2>> If you want to know more, email me offline and I can go into the</FONT>
<BR><FONT SIZE=2>> specifics. If you are purchasing for a data center and want turnkey,</FONT>
<BR><FONT SIZE=2>> then I would look into the Sun T3 or A1000, or EMC and Hitachi, but be</FONT>
<BR><FONT SIZE=2>> ready for the sticker shock. In those environments you are definitely</FONT>
<BR><FONT SIZE=2>> purchasing a "system" but it all depends on your expectations and</FONT>
<BR><FONT SIZE=2>> requirements.</FONT>
<BR><FONT SIZE=2>> </FONT>
<BR><FONT SIZE=2>> Hope I helped, </FONT>
<BR><FONT SIZE=2>> John </FONT>
<BR><FONT SIZE=2>> </FONT>
<BR><FONT SIZE=2>> Links are <A HREF="http://www.adjile.com" TARGET="_blank">http://www.adjile.com</A> for the racks</FONT>
<BR><FONT SIZE=2>> and <A HREF="http://www.digidata.com" TARGET="_blank">http://www.digidata.com</A> for our</FONT>
<BR><FONT SIZE=2>> controller. </FONT>
<BR><FONT SIZE=2>> </FONT>
</P>
<P><FONT SIZE=2>_______________________________________________</FONT>
<BR><FONT SIZE=2>OLUG mailing list</FONT>
<BR><FONT SIZE=2>OLUG@olug.org</FONT>
<BR><FONT SIZE=2><A HREF="http://lists.olug.org/mailman/listinfo/olug" TARGET="_blank">http://lists.olug.org/mailman/listinfo/olug</A></FONT>
</P>
</BODY>
</HTML>