SUMMARY: Need info on RAID drives

From: Gary Lopez <gary_at_catapult.com>
Date: Mon Jun 27 2005 - 13:29:43 EDT
Thanks to Wesley Garland, Tim Chapman, and Brian Miller.

ORIGINAL QUESTION: My E420R support contract with Sun is up for renewal.
My boss wants me to consider some options, including having an E280R with
a disk array attached as a standby. Here is my problem.
         My disks (8 x 18 GB SCSI, 4 x 36 GB SCSI) are striped and mirrored
using DiskSuite. If for some reason the E420R went down, is it possible to
move the disks into an E280R with an array, or will the mirror be confused
because the controller addresses are now different? Will DiskSuite even
care? Is there a way to successfully bring these disks up on the E280R
if, say, the E420R motherboard crashes?

ANSWERS ...
#1) I'm pretty certain that if you transplant the disks to/from the same
locations, you should be fine.  I think the only things that might cause
issues or require verification are settings stored in the boot PROM
(i.e., which boot disk to use, etc.).  Such things can be verified and
tested, and then set appropriately.  This is of course assuming the
hardware is more-or-less the same: you want OBP flashed to the same
revision on the production and spare systems, and the same add-on
hardware present in both, if any is required.  Or, of course, be sure to
install that hardware before powering up; otherwise the device trees
might get clobbered/rebuilt and potentially inherit "change" in the
process.  For instance, I've seen a case where I

- moved an A1000 from SunBlade 100 (A) to SunBlade 100 (B)
- booted up blade (A) once without the array attached, then did a
reconfigure reboot, so I lost my device paths
- re-attached the array; it now had a different identity after the next
reconfigure reboot.

So, ideally, you get the hardware "the same" before booting. If you
can't, however, the tweaks required should be fairly minimal.
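
As a rough illustration of the kind of verification meant above (the
commands are standard Solaris ones, but the device names shown are
examples only, not anything from the original post), you might compare
the PROM settings and device trees on both machines before trusting the
spare:

   # on each host, note the boot device and any aliases stored in the PROM
   eeprom boot-device
   eeprom nvramrc

   # compare the controller/target numbering the OS actually sees
   ls -l /dev/dsk/c*t*d*s0
   format < /dev/null

   # after moving hardware, rebuild the device tree explicitly
   devfsadm -Cv     (or do a "boot -r" from the ok prompt)

If the cXtXdX numbers differ between the two boxes, that is exactly the
sort of inherited "change" warned about above.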

#2) I have successfully moved a mirrored drive from one V240 to another,
rebuilt the mirror, and gotten the system to work without a
problem.  But I was only using disk mirroring, no striping.  I was able
to do this without first breaking the software mirror (i.e., I just shut 
one system down, moved one drive to the other system, and brought both 
systems back up, one at a time).  I brought the second system up without 
attaching a network cable.  After rebuilding the mirrors, I did a 
"sys-unconfig" on the second system, connected the network cable, and 
brought the system up, changing its IP and server name.
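
For what it's worth, a rough sketch of what checking and resyncing the
mirrors on the second system might look like (the metadevice d10 and
slice c1t1d0s0 are made up for illustration; the original post doesn't
give the real names):

   # check the state database replicas and the metadevice states
   metadb -i
   metastat

   # if a submirror component shows "Needs maintenance", re-enable it
   metareplace -e d10 c1t1d0s0

   # once everything reports "Okay", strip the host identity as above
   sys-unconfig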

I was also fortunate to be able to test this in a non-production 
environment.  I don't think I would even attempt it if the RAID 
controllers weren't identical.

The only way I would be willing to do this is if I could test the 
procedure in a non-emergency situation, which means you should have a 
spare set of disks on which to test.  But if I had that extra disk space,
I would just run the E280R and use rsync to keep the files on the
backup system in sync with the production system.  After all, what 
happens if a couple of disks die on your production system?  Having that 
extra processor and motherboard won't do you any good at that point.
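
A minimal sketch of that rsync idea (the hostname and path are
placeholders, not anything from the original post):

   # run periodically (e.g. from cron) on the standby E280R, pulling
   # /export/data from the production E420R over ssh
   rsync -az --delete -e ssh produser@e420r-prod:/export/data/ /export/data/

rsync only keeps the file trees in step; you still have to decide
separately how to fail over services and IP addresses.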

#3) Here's how to do a really good job of a hot standby with SDS/SVM.
This is similar to the procedure for VxVM, but Veritas is a little more 
tolerant of hardware/software mismatches:

1. Use External disks. I like Sun A5x00 arrays; they are cheap [on eBay] 
and robust.
2. Disks get connected to BOTH hosts (using either FC_AL or 
multi-initiator SCSI)
3. You MUST have the exact same OS version and FC_AL controllers,
preferably in the same slots.
4. Disks MUST have same cxtxdx numbers on both hosts. Modify 
/etc/path_to_inst to achieve this if you're very brave; can also modify 
pci slot scan order in EEPROM.
5. Set up the disks in a shared metaset (see the command sketch after this list)
6. If you are using the latest Solaris 9, you can have "autotake"
metasets. Otherwise, your mount-at-boot scripts will need to do a
"metaset -s SetName -t"
7. If one box fails, do a "metaset -s SetName -t" on the other, and 
mount the storage. No hardware switching necessary!
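
To make steps 5-7 concrete, here is a rough sketch of the diskset
commands involved (the set name "standby", the hostnames, and the disk
names are invented for illustration; check metaset(1M) on your Solaris
release for the exact behaviour):

   # on host1: create the shared set and grant both hosts access
   metaset -s standby -a -h host1 host2
   metaset -s standby -a c2t0d0 c2t1d0 c2t2d0 c2t3d0

   # build stripes/mirrors inside the set as usual, e.g. a two-disk stripe
   metainit -s standby d10 1 2 c2t0d0s0 c2t1d0s0

   # on failover, the surviving host takes the set and mounts the storage
   metaset -s standby -t        (add -f if the dead host still owns the set)
   mount /dev/md/standby/dsk/d10 /export/data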

In GENERAL -- SVM/SDS sets are a bitch to move from one machine to 
another, especially without the initial configuration of both-at-once as 
described above. If you are really stuck, you can carefully recreate the 
disk set on the other box, but you risk losing all your data.

-- 
Gary D Lopez
Unix Systems Administrator
Catapult Communications
160 S Whisman Rd
Mountain View, CA 94041
Ph  (650) 314-1029
Fax (650) 960-1029
_______________________________________________
sunmanagers mailing list
sunmanagers@sunmanagers.org
http://www.sunmanagers.org/mailman/listinfo/sunmanagers