SUMMARY: Moving DiskSuite Raid Volume to another Machine

From: O'Connor, Joe D <OConnorJD_at_navair.navy.mil>
Date: Thu Jun 19 2003 - 08:00:45 EDT
My original post (below) asked for guidance in moving a DiskSuite 4.2.1 RAID 5
volume (with two soft partitions built on top of it) from a 220R to a
280R which had root mirrors set up.  My thanks to Trent Petrasek, who pointed
me to the link below, referring to a 1999 post which answered many of the
questions.  
 
http://www.deathwish.net/solaris/migrating_ODS_metadevices.html
 
What I could not find was any reference to how to re-establish the soft
partitions on the new machine.  I can break the mirrors on the new machine,
re-establish the state databases, and then re-establish the RAID 5 using the
-k option in the md.tab file (after removing the lines referring to the soft
partitions).  I would then have the RAID 5 volume established, but not its two
soft partitions.  I found documentation stating that metarecover {slice} -p
-d could be used on a slice to read existing soft partitions and recreate
them in the state database, but it made no mention of whether a RAID 5 volume
could be used instead of a slice.  Any input on the soft partitions would
still be greatly appreciated.  
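
For what it is worth, the metarecover invocation I have in mind is sketched
below.  It follows the documented slice syntax and simply substitutes the
RAID 5 metadevice d0; whether metarecover actually accepts a metadevice there
is exactly the open question, so the -n dry run would be the thing to try first.

metarecover -v -n d0 -p -d     # dry run: report what would be rebuilt from the on-disk extent headers
metarecover -v d0 -p -d        # if the dry run looks sane, recreate soft partitions d1 and d2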
 
Thanks
 
Joe O'Connor
 
Original Post:
 
 Hi all, 

 
I need to move a Solaris 8, DiskSuite 4.2.1, RAID 5 volume from a 220R to a
280R.  The 280R has identical software with an existing mirror of the root
disk.  The disk controllers will be on different IDs.  The 220R RAID 5
volume consists of eight 140 GB third-party drives with two soft partitions.
In addition, there are overlapping metadevice names.
 
I would like to preserve the RAID 5 data.  The data consists of over 6
million files and is extremely slow to reload.  I do have the option of
removing the mirrors on the target system, re-initializing the state database,
adding the existing RAID 5 set using the -k option with metainit, and then
re-establishing the mirrors.  
 
My approach so far would be:
 
1.  Remove the mirrors on the 280R.
2.  Copy the md.tab file from the 220R to the 280R and change the device
names to match the controller on the 280R.
3.  Add the -k option to the md.tab line creating the RAID.  (But will the soft
partitions be left intact?)
4.  metainit -a
5.  Re-establish the mirrors.  (A rough command sketch follows this list.)
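
For concreteness, here is a rough sketch of what I think steps 1 through 5
amount to in commands, using the metadevice names from the 280R output further
down.  The replica slices (c1t0d0s7 and c1t1d0s7) are only a guess on my part;
substitute whatever metadb -i actually reports.  Re-creating the mirrors
afterwards is sketched separately after the questions below.

metaroot /dev/dsk/c1t0d0s0              # if / is on d100: revert vfstab/system to the slice, then reboot
metaclear -r d100                       # clear the mirrors and their submirrors,
metaclear -r d101                       #   freeing the names d0, d1, d10 and d11
metadb -i                               # note where the existing replicas live
metadb -d -f c1t0d0s7 c1t1d0s7          # remove the old replicas (slices are a guess)
metadb -a -f -c 2 c1t0d0s7 c1t1d0s7     # re-initialize the state databases
metainit -a                             # build everything in the edited md.tab, including d0 with -k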
 
Also, I saw in the Sun Volume Manager guide something indicating that md.conf
would have to be edited to point at one valid state replica.  It mentioned
adding the items below and then rebooting to force Volume Manager to
reload the configuration.
 
mddb_bootlist1="sd:71:16:id20"  (where sd:71 was the major name/minor number
of a valid state replica)
md_devid_destroy=1
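
If that turns out to apply here, my understanding is that the numbers for the
bootlist entry can be read off the running system along the lines below;
c1t0d0s7 is only a placeholder for whichever slice actually holds a good
replica, and I have not verified this against DiskSuite 4.2.1.

ls -lL /dev/dsk/c1t0d0s7          # the "major, minor" pair replaces the file size; the minor number is what follows sd:
grep '^sd ' /etc/name_to_major    # confirms the major number the sd driver name maps to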
 
My questions are:
 
1.  Is the above approach valid?
2.  Would the soft partitions be re-established?
3.  Do I need to make the modification to md.conf (the note that I saw was
for the newer Volume Manager)? 
4.  What is the simplest way to break the mirrors and re-initialize the
database?
5.  Given that there is such an overlap of metadevice names, is breaking and
re-establishing the mirrors the logical choice?
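
On the last question, my current thinking is that once d0, d1 and d2 are taken
by the imported RAID and its soft partitions, the root mirrors simply get
rebuilt under submirror names that no longer collide, along these lines
(d20/d30 are arbitrary names, and d101 would be rebuilt the same way from
c1t0d0s1 and c1t1d0s1):

metainit -f d20 1 1 c1t0d0s0      # -f because the slice holds the mounted root filesystem
metainit d30 1 1 c1t1d0s0
metainit d100 -m d20              # one-way mirror first
metaroot d100                     # update /etc/vfstab and /etc/system, then reboot
metattach d100 d30                # attach the second side after the reboot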
 
Thanks
Joe O'Connor
 
Output from the metastat -p command for both systems is below:
 
RAID 5 to be moved (220R system):
 
metastat -p  
 
d1 -p d0 -o 1 -b 996147200
d2 -p d0 -o 996147202 -b 891289600
d0 -r c2t0d0s0 c2t1d0s0 c2t2d0s0 c2t3d0s0 c2t4d0s0 c2t5d0s0 c2t8d0s0 c2t9d0s0 -k -i 32b
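
Translated into an md.tab entry for the 280R, I expect the RAID line to end up
looking roughly as below, with the soft partition (d1/d2) lines left out of the
first metainit -a run.  The controller number c3 is purely illustrative; the
real number will depend on where the array shows up on the 280R.

d0 -r c3t0d0s0 c3t1d0s0 c3t2d0s0 c3t3d0s0 c3t4d0s0 c3t5d0s0 c3t8d0s0 c3t9d0s0 -k -i 32b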
 
Target system with existing mirrors (280R system) to receive the RAID:
 
# metastat -p
d100 -m d0 d10 1
d0 1 1 c1t0d0s0
d10 1 1 c1t1d0s0
d101 -m d1 d11 1
d1 1 1 c1t0d0s1
d11 1 1 c1t1d0s1 
 
# more /kernel/drv/md.conf  
#
#ident "@(#)md.conf   1.7     94/04/04 SMI"
#
# Copyright (c) 1992, 1993, 1994 by Sun Microsystems, Inc.
#
name="md" parent="pseudo" nmd=128 md_nsets=4;
#                                                