SUMMARY : VCS 1.2 - Configure NFS Service

From: Ed Crotty (ecrotty@vantage.com)
Date: Wed Nov 15 2000 - 15:58:42 CST


Got a lot of good help on this one...

I have a PDF version of the 1.1 quick start from Vince Gonzalez if anyone
wants it (I'm not sure everyone wants me to pump that through the list).

Basically, the overwhelming theme was to make sure the major and minor
device numbers match up on both nodes. Summaries / techniques / tips below...

Thanks to :

Glenn Richards (can't give enough BIG thanks.. he helped out with a
bunch of other VCS tasks, such as how to add mount points to an existing
group - included in the summary)
Vince Gonzalez
Mike Marcell

Original Question :

I have two machines with the following setup:

 a couple of 450s
 two shared D1000s
 scsi-initiator-id set properly (no resets down the bus, etc...)
 VCS on both hosts
 Volume Manager on both hosts
 a diskgroup / volume called nfsdg / nfsvol
 a mount called /nfsshare on both machines...
 a virtual IP waiting to be used for the NFS service

 so, I'm all set to go :)

 I was just wondering if someone had some pointers / scripts / online
 references / etc. for configuring an NFS service in VCS 1.2... it should
 be pretty straightforward, I imagine...

 thanks!

 -ed

Responses:

Vince Gonzalez
===============
Make sure that the disk devices on both systems have matching
major/minor numbers. This is necessary to allow failover to work
transparently to clients (this one bit me a while back).
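
As a quick illustration (not from the original replies): on Solaris you can
compare the device numbers with ls -lL, using the nfsdg/nfsvol names from the
question above. I believe /dev/vx/dsk devices take their major number from the
vxio driver, so check that entry as well:

 # Run on both nodes; the major,minor pair must match, or NFS clients
 # will see stale file handles after a failover.
 ls -lL /dev/vx/dsk/nfsdg/nfsvol
 grep vxio /etc/name_to_major   # driver that supplies the major number

If the numbers differ, the usual fix is to make the vxio entries in
/etc/name_to_major agree on both nodes and then do a reconfiguration reboot.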

Mike Marcell
============

I don't have any of what you're looking for, but I'm in the process of
setting up VCS here, and I was recently told to upgrade to VCS 1.3, as
everything below that was very unstable. Not sure, but I thought that
I'd pass it along.

(He sent it to me; if anyone wants it, let me know.)

Glenn Richards
==============

Ed, I was confronted with the same issue when I started with 1.2...
Veritas' documentation isn't too explicit when it comes to NFS mounts.
Although the Mount agent, if used, will mount the NFS filesystem, it
will fail, as it cannot monitor it.

 -- Anyway --

 What I did was place my NFS mounts in the automounter table and build
a script agent to start autofs via VCS rather than via the rc2.d script.
 Basically, I take the autofs start/stop script and build the online and
offline scripts from it (substituting start and stop respectively), and
set the monitor script to a fancy grep for automountd, returning 110 or
100 depending on whether it is up or not.

 I can give you specifics if you need them, or if someone has a better
idea, I'd like to hear it.

 Anyway..... Here are the configs that I told you I would get for you...
 I'll walk through this in detail as a roadmap for creating script agents
 in Veritas (without all the extra static you get when you read the Agent
 Developer's Guide! =-> )

 Objective: Provide for NFS mounts in a VCS Cluster

 System Configuration: 10 systems running VCS (in this case a pair of
E10Ks, 5 domains each)
                 NFS Servers: Domain4 and Domain8
                 NFS Clients: Domain0 - Domain9

 Pre-work you have to do:
 1. Set up your NFS servers with the appropriate DiskGroup, Mount, and
Share resources.
 2. mv /etc/rc2.d/S74autofs /etc/rc2.d/vcs.S74autofs ### This moves the
autofs autostart at system boot out of the way.
 3. Set up your automounter tables on the client machines so that they
automount the filesystems from the NFS servers the way you want them to
(a sample map entry is sketched just below this list).
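
As a sketch only (these entries are not from the original post), a direct map
on each client might look like this; "nfsvip" stands in for whatever hostname
you give the virtual IP of the NFS service, and /nfsshare comes from the
original question:

 # /etc/auto_master on each client: hand direct mounts to auto_direct
 /-    auto_direct

 # /etc/auto_direct: mount the share from the NFS service address
 /nfsshare    -rw    nfsvip:/nfsshare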

 VCS Configuration Steps:

 1. mkdir /opt/VRTSvcs/bin/Autofs
 2. cp /opt/VRTSvcs/bin/ScriptAgent /opt/VRTSvcs/bin/Autofs/AutofsAgent
 3. Create /opt/VRTSvcs/bin/Autofs/online ### This is the
 script to bring autofs online

 #!/bin/sh
 /usr/lib/autofs/automountd </dev/null >/dev/msglog 2>&1
 /usr/sbin/automount &

 4. Create /opt/VRTSvcs/bin/Autofs/offline ### This is the script to
 bring autofs down

 #!/bin/sh
 /sbin/umountall -F autofs
 /usr/bin/pkill -x -u 0 automountd

 NOTES to 3 and 4: These look suspiciously similar to the code within
the S74autofs script!!!

 5. Create /opt/VRTSvcs/bin/Autofs/monitor ### This is the script that
 executes to verify that automounter is running

 #!/bin/sh
 ps -ef | grep "/usr/lib/autofs/automountd" | grep -v grep > /dev/null 2>&1
 if [ $? -eq 0 ]
 then exit 110
 else exit 100
 fi

 NOTE to 5: The exit codes are IMPORTANT, 110 for online, 100 for
offline!

 6. Create /etc/VRTSvcs/conf/config/Autofs.cf ### This file is used as
an alternative to the types.cf file. If you intend to use types.cf, you
will have to shut down the entire cluster for the change to take effect...
Not so easy a thing to do! I prefer using the "specialized" .cf files,
as you can set these up on the fly.

 type Autofs (
         static str LogLevel = Error
         static str ArgList[] = { }
         NameRule = ""
 )

 7. Add the Resource type to the VCS configuration:

 haconf -makerw
 hatype -add Autofs ### Adding new type
 hatype -modify Autofs SourceFile "./Autofs.cf" ### Set the source file to your new file name here.
 hatype -modify Autofs LogLevel Error ### Set the log level to error, just in case something fails.
 hatype -modify Autofs ArgList -delete -keys ### The following are defaults
 hatype -modify Autofs NameRule "\"\""
 hatype -modify Autofs AttrChangedTimeout 60
 hatype -modify Autofs CloseTimeout 60
 hatype -modify Autofs CleanTimeout 60
 hatype -modify Autofs ConfInterval 600
 hatype -modify Autofs MonitorInterval 60
 hatype -modify Autofs MonitorTimeout 60
 hatype -modify Autofs NumThreads 10
 hatype -modify Autofs AgentPriority ""
 hatype -modify Autofs AgentClass TS
 hatype -modify Autofs ScriptPriority ""
 hatype -modify Autofs ScriptClass TS
 hatype -modify Autofs OfflineMonitorInterval 300
 hatype -modify Autofs OfflineTimeout 300
 hatype -modify Autofs OnlineRetryLimit 0
 hatype -modify Autofs OnlineTimeout 300
 hatype -modify Autofs OnlineWaitLimit 2
 hatype -modify Autofs OpenTimeout 60
 hatype -modify Autofs RestartLimit 0
 hatype -modify Autofs ToleranceLimit 0
 hatype -modify Autofs AgentStartTimeout 60
 hatype -modify Autofs AgentReplyTimeout 130
 hatype -modify Autofs Operations OnOff
 hatype -modify Autofs FaultOnMonitorTimeouts 4 ### End of default values.

 8. Alter the attributes of the Autofs Resource Type

 haattr -default Autofs AutoStart 1 ### This is a Default Value
 haattr -default Autofs Critical 1 ### This is a Default Value
 haattr -default Autofs Enabled 1
 haattr -default Autofs TriggerEvent 0 ### This is a Default Value
 haattr -default Autofs ResourceOwner unknown ### This is a Default Value

 9. Create the group that will hold your Autofs resource.

 hagrp -add autofsgrp
 hagrp -modify autofsgrp SystemList domain0 0 domain1 1 domain2 2 domain3 3 \
   domain4 4 domain5 5 domain6 6 domain7 7 domain8 8 domain9 9
 hagrp -modify autofsgrp Parallel 1 ### Mine is parallel, as I want it
running on all 10 domains; your config might be different.
 hagrp -modify autofsgrp SourceFile "./main.cf"

 10. Create the resource for Autofs and Enable it.

 hares -add P-Autofs Autofs autofsgrp
 hares -modify P-Autofs Enabled 1
 hares -modify P-Autofs AutoStart 1
 hares -modify P-Autofs Critical 1

 11. Link the Autofs group to your NFS group(s) on the remote system(s)
(we will call them nfsgrp1 and nfsgrp2):
 hagrp -link autofsgrp nfsgrp1 online global firm
 hagrp -link autofsgrp nfsgrp2 online global firm
 haconf -dump -makero ### Close the configuration to editing.

 NOTE to 11: The link here is "online global firm" because I have my NFS
groups fail over to other systems in the event of a catastrophic failure...
This just maintains the ability to restart the autofs group if it starts
while the primary nfsgrp server is down.

 12. Copy everything, everywhere:

 Copy the following to all nodes in the cluster:
 /etc/VRTSvcs/conf/config/Autofs.cf
 /opt/VRTSvcs/bin/Autofs
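
Purely as a sketch (not in the original summary), the copy can be looped over
the domain names used in the SystemList above, assuming rsh/rcp is trusted
between the nodes:

 # copy the type definition and the agent directory to every other node
 for host in domain0 domain1 domain2 domain3 domain4 \
             domain5 domain6 domain7 domain8 domain9
 do
   rcp /etc/VRTSvcs/conf/config/Autofs.cf ${host}:/etc/VRTSvcs/conf/config/
   rcp -r /opt/VRTSvcs/bin/Autofs ${host}:/opt/VRTSvcs/bin/
 done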

 13. Start the Agents, everywhere:

 haagent -start Autofs -sys <FILL IN THE BLANK, A perfect use for a for
 loop!>
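
For example, a sketch using the same domain names as the SystemList above
(substitute your own node names):

 for host in domain0 domain1 domain2 domain3 domain4 \
             domain5 domain6 domain7 domain8 domain9
 do
   haagent -start Autofs -sys ${host}
 done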

 That gets you where you need to go, even in a complex environment.

 RESULT:

 From a cold start of the cluster, your NFS servers will start their
disk groups, mounts, and shares, then the Autofs group will kick off on
all remote hosts, allowing the mounts to be satisfied over NFS.

 Good Luck! Happy Hunting!

I usually use the command line to define mount points, e.g.

 hares -add PM-appl Mount nfsgrp1 ### Existing Group
 hares -modify PM-appl Enabled 1
 hares -modify PM-appl Critical 0
 hares -modify PM-appl MountPoint "/export/appl"
 hares -modify PM-appl BlockDevice "/dev/vx/dsk/diskgrp1/applications"
 hares -modify PM-appl FSType vxfs
 hares -modify PM-appl MountOpt rw

 For your information, VCS has an Oracle add-on that manages a lot of
the Oracle stuff.

Yes, main.cf will be updated on all nodes after you issue the commands
and then "-dump" the configuration to disk. At VCS startup, main.cf is
read into memory on all nodes, and the on-disk copy is updated on an
internally defined schedule.

  use the command:

  haconf -makerw

  to make it writeable, then:

  haconf -dump -makero

  to commit the changes and make the configuration read only again.



