[olug] SAN Cluster Filesystem Question

Christopher Cashell topher-olug at zyp.org
Wed May 20 19:56:36 UTC 2009


On Wed, May 20, 2009 at 1:50 PM, Andrew Embury <drazak at ingenii.com> wrote:
> I have an application for content delivery where I need to allow 10-12 Linux
> based servers access to a single FibreChannel SAN Volume at the block level
> to distribute content.  I'm familiar with products like Veritas Clustered
> Storage Filesystem, Quantum's StorNext, and Apple's XSan2 that allow this
> kind of solution.  My question would be is there any native way in Linux to
> deliver this type of solution?  The most helpful responses for me would be
> based on experience as I've already done a fair amount of research via
> Google.

There are two major solutions in this space, and a couple of lesser
ones.  The big hitters (GPLed and included in the current stock Linux
kernel) are Red Hat's GFS and Oracle's OCFS2.

I have much less experience with GFS, so I won't spend too much time
on it.  Of note, there are actually two variants, gfs and gfs2; gfs2
was only officially declared stable and supported as of RHEL5.3.  It
is part of Red Hat's cluster suite, has more dependencies, and is
more complicated to set up and configure than OCFS2.
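For reference, once the Red Hat cluster stack is up, creating and
mounting a GFS2 filesystem looks roughly like this.  It's a sketch
from memory; the cluster name, device path, and journal count are
made up and would need to match your environment:

    # one journal per node that will mount the filesystem
    mkfs.gfs2 -p lock_dlm -t mycluster:content -j 12 /dev/mapper/san_lun

    # mount on each node (the cluster stack must already be running)
    mount -t gfs2 /dev/mapper/san_lun /mnt/content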

I have used Oracle's OCFS2 in multiple setups with excellent success.
Oracle provides packages for use with RHEL (also usable with CentOS),
among others, and Debian/Ubuntu support OCFS2 with native packages.
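Installation is just the usual package dance.  Roughly (package names
from memory, so double-check them against what your distro or Oracle
actually ships):

    # Debian/Ubuntu
    apt-get install ocfs2-tools

    # RHEL/CentOS with Oracle's RPMs: the kernel module package has to
    # match your running kernel (compare against `uname -r`)
    rpm -Uvh ocfs2-<kernel-version>-<pkg-version>.rpm \
             ocfs2-tools-<pkg-version>.rpm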

Note that there are technically two filesystems here: OCFS (v1) and
OCFS2.  OCFS was never included in the mainline Linux kernel, and *is*
Oracle specific (usable for little more than Oracle data files).
OCFS2 is a POSIX-compliant, general-purpose clustered filesystem that
is included in the mainline kernel and is what pretty much everyone
uses (if they're using one of the two).
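To give you an idea of how little there is to it, the whole setup is
basically one small config file plus the o2cb init script.  The
following is a sketch from memory with made-up node names and
addresses, so treat it as illustrative rather than copy-and-paste:

    # /etc/ocfs2/cluster.conf (identical on every node)
    cluster:
            node_count = 2
            name = ocfs2

    node:
            ip_port = 7777
            ip_address = 192.168.1.10
            number = 0
            name = content01
            cluster = ocfs2

    node:
            ip_port = 7777
            ip_address = 192.168.1.11
            number = 1
            name = content02
            cluster = ocfs2

    # bring up the cluster stack, then format (once) and mount (on
    # every node) the shared LUN
    /etc/init.d/o2cb configure
    /etc/init.d/o2cb online ocfs2
    mkfs.ocfs2 -L content -N 12 /dev/sdb1    # -N = max node slots
    mount -t ocfs2 /dev/sdb1 /mnt/content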

The first place I implemented OCFS2 was on a 2-node Oracle RAC
cluster.  We didn't actually use it for data files, but we did use it
for a shared /home directory across the boxes and a shared archive
log location.  We tested it across two different fiber-connected SANs
(EMC Clariion and StorageTek) with excellent results.  The OS was
RHEL4.
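For what it's worth, making a shared mount like that /home persistent
was just an fstab entry plus enabling the init scripts at boot;
something along these lines (device name made up):

    # /etc/fstab -- _netdev delays the mount until networking is up
    /dev/mapper/san_home  /home  ocfs2  _netdev,defaults  0 0

    # start the cluster stack and mount OCFS2 filesystems at boot
    chkconfig o2cb on
    chkconfig ocfs2 on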

The second implementation never made it out of a test environment, as
I left the company before it went into production.  It was a two-node
HA NFS server.  The OS was RHEL5.

The third implementation is a trio of Debian boxes that I'm still in
the early build phases on.

As for which one to choose, GFS or OCFS2, I honestly can't tell you
which is better.  I haven't used both of them enough (and in
production) to have a reliable opinion.  I can tell you that I chose
OCFS2 after doing a *lot* of research and investigation, and some
implementation testing of both.  OCFS2 is a clustered FS and that's
basically it; there aren't a lot of dependencies or additional pieces
needed.  GFS is part of the RH cluster suite, has more dependencies,
is more complicated to set up, and has more parts involved.  If you
were interested in making use of the full RH cluster suite, then I'd
suggest looking hard at GFS.  If you just want a clustered FS, I'd
suggest giving OCFS2 a shot.

Also, as someone else mentioned, it might be worth looking into NFS.
I've had excellent luck hanging a box off of a SAN and exporting the
disks via NFS to dozens of servers.
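That setup is nothing fancy: one box sees the SAN LUN, formats it
with a regular local filesystem, and exports it.  A rough sketch
(paths and subnet are made up):

    # on the server attached to the SAN
    # /etc/exports
    /srv/content  192.168.1.0/24(rw,sync,no_subtree_check)

    exportfs -ra

    # on each client
    mount -t nfs nfsserver:/srv/content /srv/content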

> Drew

-- 
Christopher


