From: Ray Van Dolson <>
Date: Thu Mar 26 2009 - 19:21:14 EDT
Thanks to the following people for your replies (hope I didn't miss anyone):

  - Anthony D'Atry
  - Alex Stade
  - Tom Lieuallen
  - Michael Greenbe
  - Matt Clausen
  - Nick Hindley
  - Ryan Anderson
  - Andrey Borzenko

And even a shout out to a surly chap named "hike" who apparently
didn't take too kindly to my post. :(

I'll hit the high points:

- Our basic idea of using two blades to talk to storage is a sound one,
  of course keeping in mind that the blade center chassis is still a
  single point of failure.
- We can make use of Solaris Cluster 3.2:
  - This software is available for free
  - We'd have access to patches through our existing support contract
  - Actual *support* for Solaris Cluster from Sun is an extra cost
    ($50k is the number we heard from Naz at one point), and typically
    has a few strings attached:
    - Cluster needs to be EIS certified
    - This might even involve needing to have a Sun FE set everything
      up for us (at a cost).
    - $$$
  - Solaris Cluster 3.2 purportedly supports ZFS now.  QFS is also an
    option, and it sounds like some folks are even using UFS (though it's
    slow).
  - Plenty of people out there are happily using Veritas, though it's
    costly.
  - There were some mixed opinions on Solaris Cluster's usability with
    some suggesting it was overly complex, and others describing a
    pretty easy setup experience.
  - Several recommended taking the Cluster class
- This won't be a "scalability" (load-balancing) option.  Only one host
  can be "master" at a time (the nature of NFS).
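For reference, here's a rough sketch of what the HA-NFS setup under
Solaris Cluster 3.2 might look like.  The resource/group names, logical
hostname, Pathprefix, and zpool name are all hypothetical -- check the
Sun Cluster Data Service for NFS guide before trying any of this:

```shell
# Register the resource types: SUNW.nfs ships with the HA-NFS data
# service; SUNW.HAStoragePlus handles importing/exporting the ZFS pool.
clresourcetype register SUNW.nfs SUNW.HAStoragePlus

# Failover resource group; SUNW.nfs wants a Pathprefix on shared storage.
clresourcegroup create -p Pathprefix=/global/nfs nfs-rg

# Logical hostname the NFS clients mount from; it fails over with the group.
clreslogicalhostname create -g nfs-rg nfs-lh

# HAStoragePlus resource managing the (hypothetical) "nfspool" zpool.
clresource create -g nfs-rg -t SUNW.HAStoragePlus \
    -p Zpools=nfspool nfs-hasp

# The NFS resource itself; the shares it exports are listed in
# <Pathprefix>/SUNW.nfs/dfstab.nfs-res.
clresource create -g nfs-rg -t SUNW.nfs \
    -p Resource_dependencies=nfs-hasp nfs-res

# Bring it online on one blade; clients mount nfs-lh:/...
clresourcegroup online -M nfs-rg
```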

Some more technical notes:

- Special care needs to be taken to make sure major/minor numbers on
  our block devices match up on each blade.  NFS clients rely on this
  and if the numbers change you'll see stale file handle errors.
- Failover is fairly transparent, but clients can still expect a 5-30
  second delay.
- We shouldn't need to store state information regarding NFS
  connections unless we plan to support NFSv4.
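One way to spot-check the major/minor consistency is to dump the numbers
for the shared devices on each blade and diff the output.  The sketch
below uses /dev/null as a stand-in; on the cluster you'd point it at the
actual shared /dev/dsk entries (or work from the DID devices, e.g. via
`cldevice list -v`):

```shell
#!/bin/sh
# Print "path major,minor" for each device argument.  Run the same
# command on both blades, save the output, and diff the two files --
# mismatched numbers are what lead to stale file handle errors on the
# NFS clients after a failover.
majmin() {
    for dev in "$@"; do
        # For device files, ls -lL shows "major, minor" where the size
        # normally goes (whitespace fields 5 and 6).
        ls -lL "$dev" | awk -v d="$dev" '{ gsub(",", "", $5); print d, $5 "," $6 }'
    done
}

majmin /dev/null
```

Usage would be something like `majmin /dev/dsk/c1t*` on each node, then
diff the saved output.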

We're going to attempt this first in a lab (virtualized environment),
but definitely aim for a Solaris Cluster 3.2 + ZFS solution.  There was
some technical advice regarding DID, PxFS and more that hopefully we'll
be able to apply as we get further into this.

It was good hearing all of your experiences.  Thanks for the time and
effort!

sunmanagers mailing list
Received on Thu Mar 26 18:21:59 2009