Thanks to the many respondents. There were three basic responses, each
presenting a very different way of doing what we would like to do. Here are the
three approaches:
1) Don't. Since both nodes of an OPS cluster are active at the same time,
this is not necessary. Use the Oracle client software (OCI with tnsnames)
or the middleware layer to do this.
2) Use a load balancer (Cisco, Alteon, Arrowpoint, BigIP) to direct traffic
to one of the active nodes.
3) Create a logical host that accepts connections to the database. Use
shared disk groups for the datafiles (no filesystems). Fail the logical host
over between nodes, and use an HA-NFS filesystem to hold archive logs so
they can fail over with the logical host.
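The client-side failover in option 1 is expressed in tnsnames.ora. A minimal
sketch, assuming two hypothetical node names (node1, node2) and SID PROD;
with an ADDRESS_LIST, SQL*Net tries each address in order until a listener
answers:

```
PROD.world =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2)(PORT = 1521))
    )
    (CONNECT_DATA = (SID = PROD))
  )
```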
Here is the complete response for the last approach, which we will be testing
(actually from two sources); I indicate the different responses below.
A) The two hosts are called eftpri and eftbak (these are bad names to start
with, as they imply a master/slave relationship; both nodes in an OPS
cluster are equal).
The design of the disk groups is as follows:
This is a shared disk group that contains volumes for all the Oracle raw
devices: for example data files, indexes, redo logs, and rollback segments.
There should never be any UFS filesystems within this disk
group. All the volumes are used as raw partitions by Oracle. This disk
group is present on both nodes in the cluster. There is no logical host
associated with this disk group; that's a design concept of OPS.
In the OPS design you have two database instances - one on each node. These
instances both speak to the same database (the database residing within the
shared disk group) - the lock manager takes care of any possible problems
with updates etc.
Now, because we have two database instances (one per machine), we need to
have archive log areas available on both machines. In addition to this, in
the event of one node failing, you have to have the archive log area made
available to the second node. The way we do this is to create two disk
groups, exp_dg and exp2_dg. These disk groups contain the filesystems for
the archive logs. We then use the HA-NFS infrastructure to move the
filesystems between the nodes in the event of failure. Because we have
HA-NFS, we need a logical host attached to each of the disk groups. So what
we have is this:
Node     Logical Host   DB-Instance   Disk-group   Archive log
eftpri   eftexp         DBEFT         exp_dg
eftbak   eftexp2        DBEFT1        exp2_dg      /archlog/eftbak
In the event of a node failing, the HA-NFS process will move the archlog
filesystem to the second node.
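The two archive-log disk groups can be built with standard VxVM commands.
This is only an illustrative sketch; the disk device names and volume sizes
are assumptions, not from the original posting:

```
# Initialize the two archive-log disk groups (device names illustrative)
vxdg init exp_dg  exp01=c1t0d0s2
vxdg init exp2_dg exp201=c2t0d0s2

# One volume per group to carry the archive-log filesystem (size assumed)
vxassist -g exp_dg  make archvol 4g
vxassist -g exp2_dg make archvol 4g

# Unlike the shared raw disk group, these volumes get UFS filesystems,
# which HA-NFS can then fail over with the logical host
newfs /dev/vx/rdsk/exp_dg/archvol
newfs /dev/vx/rdsk/exp2_dg/archvol
```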
You also require an exports partition. This is only required on one node,
but in the event of failure, it should be made available on the second node.
Again HA-NFS is used for this:
Node     Logical Host   DB-Instance   Disk-group   Exports area
eftpri   eftexp3        n/a           exp3_dg      /export/DBEFT
(Note - in this setup you want to run the exports on the node that is taking
the queries/requests from clients - otherwise you will incur pinging at the
Oracle level - this can hose your system).
So, now you have an OPS database setup - including an archive area and an
exports area. If a single node fails the second node will have all the
archives and exports available to it.
On accessing the database, we decided that we only wanted one of the
machines being accessed with respect to the database. The main reason for
this is that the application that was being run on the machine was not
designed for parallel query across instances (out with our control). If the
machine that all the queries were being directed to was down, we'd want the
second machine taking the traffic. We wanted this to be transparent to the
clients. To do this we had to configure all the clients with the same IP
address to connect to. The only way we can get this IP address to
reside on one node and fail over to the second node is to use a logical host.
This logical host doesn't need any disk groups attached.
The clients actually connect to the database via the logical host using
either SQL*Net or the Oracle Call Interface (OCI).
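As an example, a client tnsnames.ora entry connecting through the logical
host might look like this (the logical-host name "eftdb" is hypothetical;
the point is that HOST names the floating logical-host address, not a
physical node):

```
DBEFT.world =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = eftdb)(PORT = 1521))
    (CONNECT_DATA = (SID = DBEFT))
  )
```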
B) We have 2 physical hosts (A and B), 2 logical hosts (X and Y),
and 1 disk array that is visible to A, B, X and Y... Cluster 2.1, Solaris
2.6 and Oracle 7.3.4. Every host has its own IP... In the array there is one
Oracle DB that is visible to every host, an X"FS" filesystem that is visible
to X (and to the physical host where X currently resides), and a Y"FS"
filesystem that is visible to Y...
In tnsnames.ora we have A.world, which points to A and then B; B.world ->
B -> A. And for the logical hosts: X.world -> X -> Y and Y.world -> Y -> X,
because our users attach to the logical host, so the Oracle connections will
be on the same host. We asked about making X.world be X -> Y -> A -> B,
because X and Y may end up on the same machine with the listener off-line...
but...
You could put the Oracle connections on only one machine for performance.
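The X.world -> X -> Y -> A -> B chain asked about would just be a four-entry
address list; sketching it with the same placeholder host names (the SID DBX
is an assumption):

```
X.world =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = X)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = Y)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = A)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = B)(PORT = 1521))
    )
    (CONNECT_DATA = (SID = DBX))
  )
```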
Norwest Financial Information Services Group
email pager email@example.com
This archive was generated by hypermail 2.1.2 : Fri Sep 28 2001 - 23:14:10 CDT