SUMMARY: Disksuite 1+0 Volumes

From: Ryn <matty91_at_mindspring.com>
Date: Sun Jun 10 2001 - 15:24:37 EDT
I received a few more replies after my last SUMMARY and it turns out
Disksuite 4.2 does 1+0. Thanks go out to:

John Philips
Marco Greene
Darren Dunham
Donald T. Harris
Kevin Colagio
Damon LaCaille

and especially Chris Miles. Chris conducted an experiment that proves
1+0 is done by default. His experiment is attached below.

- Ryan

Chris Miles' experiment:
The Stripe/Mirror Redundancy Experiment with Disksuite.

    2000-03-07 Chris Miles <cmiles@connect.com.au>
    $Id: stripemirror_experiment.txt,v 1.2 2000/03/06 23:57:45 cmiles Exp $


BACKGROUND:
    RAID 0 means combining multiple hard disk drives together to form a
    larger volume set.  The drives are combined either by concatenating
    them or, preferably (to balance load across all drives and increase
    throughput), by striping them.  Thus a large logical disk is created.

    RAID 1 provides redundancy by mirroring all data across 2 sets of
    disks.  Twice the number of disks are required, but any one disk can
    fail with no loss of data or interruption to service, and up to half
    the total number of disks can fail so long as no mirrored pair loses
    both of its disks.

    RAID 0+1 combines striping and mirroring so you get the advantages of
    both: large logical filesets and redundancy.  RAID 0+1 indicates that
    two identical stripe sets are created (RAID 0) which are then set up
    as mirrors of each other (+1).

RAID 0+1 example
                 +------+     +------+     +------+
              ---|  d1  | //  |  d2  | //  |  d3  |     stripe set 1 (RAID 0)
              |  +------+     +------+     +------+
      mirror -|
    (RAID 1)  |  +------+     +------+     +------+
              ---|  d4  | //  |  d5  | //  |  d6  |     stripe set 2 (RAID 0)
                 +------+     +------+     +------+

    RAID 1+0 is the same concept as 0+1 (mirror of stripes) but differs
    in implementation.  Instead of stripe sets being mirrored, pairs of
    disks are mirrored and the logical mirror of each pair is used as a
    member of the stripe.

RAID 1+0 example
           m    +------+    m    +------+    m    +------+
           i  --|  d1  |    i  --|  d2  |    i  --|  d3  |
           r  | +------+    r  | +------+    r  | +------+
           r--|          // r--|          // r--|          <--stripe
           o  | +------+    o  | +------+    o  | +------+
           r  --|  d4  |    r  --|  d5  |    r  --|  d6  |
           1    +------+    2    +------+    3    +------+

    Conceptually, RAID 0+1 and 1+0 are different, and 1+0 actually offers
    more redundancy.  Consider the failure of drive d4 in the diagrams
    above.  In the RAID 0+1 setup, a failure of d4 invalidates the whole
    of stripe set 2 (a stripe set cannot recover from any disk failure),
    leaving valid data on stripe set 1 only.  Only 3 disks are
    effectively useful now.

    In the RAID 1+0 example, a failure of d4 simply means that mirror1
    has lost one side of its mirror, without affecting either of the
    other two mirrors.  Thus the other 5 disks still contain valid data
    and are useful.

    Consider a second failure, of drive d2.  In our RAID 0+1 example,
    losing d2 would invalidate stripe set 1, leaving no valid stripe set
    left, and our whole drive set is lost.  In RAID 1+0, losing d2 only
    affects mirror2, which can still survive with its second submirror
    (d5), meaning that the whole disk set is now down to 4 valid drives
    (d1, d5, d3 and d6) but will still function happily.

    Thus RAID 1+0 is the preferred method of implementing stripe/mirror
    disk sets.


AIM:
    To determine whether or not Solaris Disksuite can implement RAID 1+0.


THEORY:
    Solaris Disksuite allows the user to create striped disk sets
    (RAID 0) of multiple physical disks.  It also allows the user to
    create mirrored sets of physical disks or metadevices (logical disk
    sets).

    Creating a RAID 0+1 setup requires creating two striped disk set
    metadevices, then creating a mirror of these two metadevices, which
    is standard practice with Disksuite.

    Creating a RAID 1+0 conceptually requires creating multiple mirrors
    of disk pairs, then striping together the mirror metadevices.
    Creating multiple mirrors of pairs of disks is fine; the problem is
    that Disksuite does not allow stripes of metadevices.  Members of a
    stripe must be physical disks, hence RAID 1+0 appears not to be
    possible with Disksuite (see the sketch below).
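
    The commands involved are not quoted in the original text, but as an
    illustration only (using hypothetical metadevice names: d101/d104 for
    single-slice concats over one pair of disks, d201-d203 for the three
    mirrors, and d210 for the intended stripe) the conceptual RAID 1+0
    attempt would look roughly like:

# metainit d101 1 1 c2t2d0s2        (single-slice concat, first disk of a pair)
# metainit d104 1 1 c2t10d0s2       (single-slice concat, its partner disk)
# metainit d201 -m d101             (one-way mirror of the pair)
# metattach d201 d104               (attach second submirror; likewise for d202, d203)
# metainit d210 1 3 d201 d202 d203 -i 32b

    The final metainit is the step Disksuite refuses: stripe members must
    be named as slices (cNtNdNsN), not as metadevices.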

    However, the author was informed that Disksuite would actually
    implement RAID 1+0 by creating a standard mirrored stripe set (which
    would conceptually seem to be RAID 0+1).  Apparently Disksuite
    automatically implements 1+0 as required.  This experiment tries to
    determine whether that is true.


METHOD:
    Disksuite 4.2 was installed under Solaris 7 on a Sun Ultra 5 with a
    PCI dual-SCSI controller.  A multi-pack containing six 2GB drives
    was used as the data disks for the experiment.

    First, a conceptual RAID 1+0 set creation was attempted by creating
    3 mirror metadevices of pairs of disks, then attempting to create a
    stripe of the mirror metadevices.  As the theory states, Disksuite
    does not allow this, as stripes cannot be made out of metadevices.

    Next a standard mirrored stripe was created by first creating two
    stripe metadevices, each containing 3 physical disks.  These two
    stripe metadevices were then mirrored together, formatted, synced
    and mounted.  The Disksuite setup looks like:

d111 -m d121 d122 1
d121 1 3 c2t2d0s2 c2t4d0s2 c2t8d0s2 -i 32b
d122 1 3 c2t10d0s2 c2t12d0s2 c2t14d0s2 -i 32b

    and diagrammatically looks like the RAID 0+1 example diagram above
    (the equivalent commands are sketched below).
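
    The summary gives only the md.tab-style definitions above; a
    plausible command sequence for building, newfs'ing and mounting the
    same setup (not quoted from the original) would be along the lines
    of:

# metainit d121 1 3 c2t2d0s2 c2t4d0s2 c2t8d0s2 -i 32b      (stripe set 1)
# metainit d122 1 3 c2t10d0s2 c2t12d0s2 c2t14d0s2 -i 32b   (stripe set 2)
# metainit d111 -m d121                                    (one-way mirror)
# metattach d111 d122                                      (attach 2nd submirror and resync)
# newfs /dev/md/rdsk/d111
# mount /dev/md/dsk/d111 /mnt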

    To make the disks busy, a simple command was
executed to continually write
    data to the disk set:

# cp /dev/zero /mnt/test.file
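
    The original does not say how disk activity was verified; one way to
    watch all six data disks while the cp runs is the standard Solaris
    iostat, for example:

# iostat -x 5

    Every data disk should show continuous write activity.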

    All disks were confirmed to be busy writing, then disk d2 (see
    diagram) was physically yanked from the multi-pack.  After a short
    pause while the OS waited for the disk to recover (and subsequently
    gave up), Disksuite marked d2 as bad, put it out of action, and
    continued to write data to all _five_ other disks.  This instantly
    all but proved that Disksuite had implemented RAID 1+0.
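
    The metastat output is not reproduced in the summary, but the state
    of the mirror and its components can be checked at any point with:

# metastat d111

    After the pull, the component belonging to the yanked disk would be
    expected to show a "Maintenance" state while the mirror itself stays
    online.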

    Continuing the tests anyway, disk d4 was then yanked and shortly
    after Disksuite marked the disk bad and continued writing to the
    other four disks.  Finally, yanking disk d6 with the other 3 disks
    still active proved that Disksuite had indeed automatically
    implemented RAID 1+0, as neither "stripe set" died when a disk was
    considered dead.

    As a final test a fourth disk was yanked from the multi-pack (disk
    d5), which Disksuite could not automatically recover from (as
    expected).  Interestingly, with all disks pushed back in, Disksuite
    recovered cleanly and continued writing with no interaction,
    although metareplace commands were required to put the other 3 disks
    back into service (see below).
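
    The exact metareplace invocations are not quoted; re-enabling a
    pulled component in place is done with metareplace -e, for example:

# metareplace -e d111 c2t4d0s2

    repeated for each of the other yanked slices, after which Disksuite
    resyncs the affected components.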


CONCLUSION:
    Solaris Disksuite 4.2 will automatically implement RAID 1+0 when a
    mirrored stripe set is created.  This is the ideal setup for best
    redundancy under those circumstances.


RESOURCES:

http://www.sunworld.com/sunworldonline/swol-06-1999/swol-06-raid1_p.html