SUMMARY: SunCluster 3.0 questions

From: <atsysmi_at_tin.it>
Date: Tue Jun 04 2002 - 09:29:40 EDT
Thanks to Steve Camp and to Abhilash. V.M., who gave the following answers
to my questions.
(I have attached their original replies.)
Roberto.


============ Steve Camp ============
> A few questions for anyone using Sun Cluster solutions:

> Storage:
> --------
> Q1: The D1000 array has 2 independent differential UltraSCSI channels, but
> how many interface boards? (If the 2 channels are handled by a single
> adapter, the D1000 is not a real HA architecture.)

The D1000 array does in fact have only a single interface board.  However, the
left-hand channels are isolated from the right-hand channels: separate wiring
on the breadboard, separate SCSI controllers, etc.  So while the right-hand
side could suffer a failure, the left-hand side would continue working without
problem.  But yes, if the entire interface board fails, then the D1000 will
"fail" -- or, I should say, the disks will be inaccessible.  Sun previously
considered the likelihood of a complete interface-board failure so rare that a
single D1000 configured in split mode was a valid configuration for Sun
Cluster 2.2.  For Sun Cluster 3.0, Sun now requires two D1000s (not in split
mode, one mirrored to the other).  In my opinion, a single D1000 running in
split mode is a satisfactory low-end clustering solution, but it is NOT a
Sun-supported configuration.

> Q2: Is the new D2 array supported in a Sun Cluster 3.0 environment?

I am unsure whether the new D2 array is supported at this time.  If it is not,
it is just a matter of time before it is supported -- it would be a
qualification issue, i.e. whether or not they have completed all their
regression testing with it.  If no one else comes up with a definitive answer,
let me know -- I can drive in to Sun and check.

> Q3: If I build a 2-node cluster with a D1000 array (the 2 channels
> daisy-chained together in a single SCSI bus), can the D1000 be easily
> replaced with a D2 in the future (changing, of course, the SCSI boards on
> the nodes)?

I do not see any problems with this plan.  You probably would not have to
change SCSI HBAs if you did not want to; however, you would not achieve the
full bandwidth of the D2.  But you should be able to make this switch.

> Network:
> ----------
> Q1: In Sun Cluster 3.0, a NAFO group provides resilience against network
> adapter failure. Does it also increase network bandwidth (spreading the
> traffic across all the interfaces of the group, as multipathing does)?

NAFO does ** NOT ** multiplex traffic.  NAFO stands for Network Adapter
FailOver: it does failover between network adapters only.

> Cluster aware applications:
> ---------------------------
> Q1: Can anybody confirm that the RSMAPIs are available only with an SCI
> interconnect?

Unfortunately, I do believe that the RSMAPI will only work with the SCI
interconnect.  I do not believe it will work over Fast Ethernet or Gigabit
Ethernet.

> Q2: Is the Netra 20 h/w compatible with the SCI (or PCI-SCI) interconnect in
> a Sun Cluster 3.0 environment?

I do not believe the PCI-SCI interconnect is supported in the Netra 20.  I
believe this is a support/qualification issue: I am unaware of any reason why
the PCI-SCI cards would NOT work in the Netra 20, and it may become supported
in the future.  One possible problem, however: are there enough PCI slots in
the Netra 20 to support two PCI-SCI HBAs plus the requisite number of storage
HBAs?

> Q3: Using the RSMAPIs, "an application on node A can export a segment of
> memory, which makes it available to applications on node B"; is this segment
> of memory still available to node B applications after a node A failure
> (including a power-off)?

I do not believe that the RSMAPI itself mirrors shared memory between
different systems.  So if node A dies, any memory it was sharing dies with it.
You could achieve mirrored shared memory, but I believe the cluster-aware
application itself would have to perform any "shared memory mirroring".
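
[Again for reference, a rough sketch of the import side on node B, assuming
the same hypothetical controller name, node id and segment id as above. It
illustrates the point: if node A dies, the RSM calls simply start failing,
and only whatever the application has already copied into its own local
buffer (its own "mirror") survives.]

/*
 * Hedged sketch: importing a remote segment with the Solaris RSMAPI.
 * The RSMAPI keeps no mirrored copy for you; the local_copy buffer below
 * is the application's own responsibility.
 * Compile roughly as:  cc -o rsm_import rsm_import.c -lrsm   (assumption)
 */
#include <stdio.h>
#include <rsmapi.h>

#define SEG_SIZE (64 * 1024)
#define SEG_ID   ((rsm_memseg_id_t)0x1234)    /* must match the exporter */
#define NODE_A   1                            /* exporting node's id: assumption */

static char local_copy[SEG_SIZE];             /* application-level "mirror" */

int
main(void)
{
        rsmapi_controller_handle_t ctrl;
        rsm_memseg_import_handle_t seg;
        int err;

        if (rsm_get_controller("sci0", &ctrl) != RSM_SUCCESS)
                return (1);

        /* Connect to the segment node A published. */
        err = rsm_memseg_import_connect(ctrl, NODE_A, SEG_ID,
            RSM_PERM_RDWR, &seg);
        if (err != RSM_SUCCESS) {
                fprintf(stderr, "rsm_memseg_import_connect: %d\n", err);
                return (1);
        }

        /*
         * Pull the remote contents into the local buffer.  Refreshing this
         * copy periodically is all the "mirroring" the application gets;
         * if node A powers off, this call starts failing and only the last
         * copy taken survives.
         */
        err = rsm_memseg_import_get(seg, 0, local_copy, SEG_SIZE);
        if (err != RSM_SUCCESS)
                fprintf(stderr, "rsm_memseg_import_get: %d\n", err);

        (void) rsm_memseg_import_disconnect(seg);
        return (0);
}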


============ Abhilash. V.M. ============
> Storage:
> --------
> Q1: My understanding is that the D1000 can be interfaced
> to two different nodes through their independent SCSI
> interfaces.
>
> Q2: We use T3s in our cluster environment. The D2 is a
> newer product, and thus it must be supported.
>
> Q3: Technically speaking, it should be possible.
>
> Network:
> --------
>
> Q1: Sun Cluster 3.0 has two sorts of resources:
> scalable as well as failover. IP multipathing is
> possible on hardware like the Sun Fire, with an OS
> at Solaris 8 4/01 or later. But IP multipathing
> within a NAFO group is something I've never seen
> implemented anywhere.
>
> Application:
> ------------
>
> The SC 3.0 "cool stuff" CD provides packages such as
> SUNWscrt, etc., with which you can easily make your
> application cluster aware. But memory mapping between
> servers would, in effect, slow your application down.
_______________________________________________
sunmanagers mailing list
sunmanagers@sunmanagers.org
http://www.sunmanagers.org/mailman/listinfo/sunmanagers