SUMMARY: server usage

From: tgorby@emc.com
Date: Fri Jul 19 1996 - 08:04:06 CDT


Thanks for the 3 responses. Fewer than I had hoped for,
but I'm grateful.

initial contact:

I would like to see some opinions and experiences
on variations of server deployment.

Given:
A. A large number of users (200 per group) with
   NFS served home directories and related tools
B. Users can be subdivided into 3 groups.
C. There are tools that are common to all groups.
D. Plans are to have the groups on separate subnets
   to reduce network traffic.

Choices:

A. A server with home directories and tools
   (one server per group) with common tools
   on another server.
B. One home directory server and one tool
   server per group with a third server with
   common tools.
   
Which choice provides the best performance?
What trade-offs have you seen/found?
Is there another choice not listed?

Responses:

1.

Assuming you are using a router between the subnets, you
should do everything you can to reduce the NFS traffic going
through the router, since the router can become a bottleneck.
(For more info on this, see the Sun "Networks and File
Servers: A Performance Tuning Guide" December 1990, Page 8,
"Mounting File Systems".)

Ideally, for best performance, each group should have a
server for home dir's and tools (including common tools).

However, since it is easier to share common tools from one
server, go with your option B (one home directory server and
one tool server per group, plus a third server for common
tools), with the third server shared by all groups. You
should not see measurable degradation in performance.
====================================
2.

You might have a look at Auspex Systems or one of the high-end Sun
platforms. One of their servers should be more than adequate to handle
your needs.

To optimize performance, you'll need to profile your NFS traffic if you
hope to stripe the data access across servers. This is a non-trivial
analysis that will need to be repeated periodically as data access patterns
tend to change over time.
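
As a starting point, you can take two snapshots of "nfsstat -s" on a
server and see which NFS operations grew the most over the interval.
The sketch below does that; it makes assumptions about the usual
Solaris output layout (a line of operation names followed by a line of
counts), so verify it against your own nfsstat output before trusting
the numbers.

#!/usr/bin/env python
# Rough sketch of NFS traffic profiling: snapshot "nfsstat -s" twice
# and report the operations with the largest increase in call counts.
# Parsing assumes the usual Solaris layout of a line of operation names
# followed by a line of counts.

import re
import subprocess
import time

def snapshot():
    out = subprocess.check_output(["nfsstat", "-s"]).decode()
    counts = {}
    lines = out.splitlines()
    for i in range(len(lines) - 1):
        names = lines[i].split()
        values = re.findall(r"\d+(?:\s+\d+%)?", lines[i + 1])
        if names and len(names) == len(values) and all(n.isalpha() for n in names):
            for name, value in zip(names, values):
                counts[name] = counts.get(name, 0) + int(value.split()[0])
    return counts

if __name__ == "__main__":
    before = snapshot()
    time.sleep(60)                      # sampling interval; tune to taste
    after = snapshot()
    deltas = [(after.get(k, 0) - before.get(k, 0), k) for k in after]
    for delta, name in sorted(deltas, reverse=True)[:10]:
        print("%-12s %d calls" % (name, delta))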

To optimize reliability, which will be somewhat at odds with performance,
you'll want to couple your servers to the workgroups they serve. This will
eliminate a single point of failure.
======================================
3.

The answer is generally not that clear cut. It depends on a lot of other
issues, including the network usage of the work groups, failover and
redundancy requirements, and how network information services and
applications are used. It sounds like a good NIS+ model with multiple
domains sharing common information could answer most of the problems.

I have designed many client/server models, and the main key is seamless
redundancy: if one server goes down, applications should automatically
switch to another server.
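
For read-only trees like shared tools, the Solaris automounter can give
you much of this for free if you list several servers in one map entry.
If an application needs to pick a live server itself, the idea looks
roughly like the sketch below; the server names are hypothetical, and
the probe only checks whether the portmapper answers on each host.

#!/usr/bin/env python
# Rough sketch of "switch to another server": probe a list of candidate
# servers and return the first one whose portmapper answers.  The server
# names are made up; real failover for read-only file systems is usually
# better handled by the automounter with a replicated map entry.

import socket

def first_reachable(servers, port=111, timeout=5.0):
    for server in servers:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((server, port))
            return server
        except socket.error:
            pass
        finally:
            s.close()
    return None

if __name__ == "__main__":
    # hypothetical replicas of the same read-only tool tree
    print(first_reachable(["tools1", "tools2", "tools3"]))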

Your model 'A' works better, given a lot of thought about the design.
======================================

4.

If you want to reduce the global network traffic, keep as much
as possible local.

It's not so important whether your home servers are tool servers too.
In some special cases those servers may get overloaded, but that is
easier to judge once you know the nature of the tools. (Do they do
heavy network I/O or heavy disk I/O, or do they need a lot of CPU
power, so that you may need a compute server? ...)
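
If you are not sure which of those your tools are, a few minutes of
vmstat and "iostat -x" on an existing server will usually tell you.
The sketch below samples both; the column positions match the usual
Solaris output, but check them on your own machines first.

#!/usr/bin/env python
# Rough sketch: sample vmstat and "iostat -x" to see whether a combined
# home/tool server is CPU-bound or disk-bound.  Column positions match
# the usual Solaris output; verify them on your own systems.

import subprocess

def cpu_idle(interval=5):
    out = subprocess.check_output(["vmstat", str(interval), "2"]).decode()
    last = out.strip().splitlines()[-1].split()
    return int(last[-1])               # last vmstat column is CPU idle %

def busiest_disk(interval=5):
    out = subprocess.check_output(["iostat", "-x", str(interval), "2"]).decode()
    busiest = (0.0, "none")
    for line in out.strip().splitlines():
        fields = line.split()
        # data lines end with the %b (percent busy) column
        if len(fields) > 2 and fields[-1].replace(".", "", 1).isdigit():
            busiest = max(busiest, (float(fields[-1]), fields[0]))
    return busiest

if __name__ == "__main__":
    idle = cpu_idle()
    busy, disk = busiest_disk()
    print("CPU idle: %d%%  busiest disk: %s at %.0f%% busy" % (idle, disk, busy))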

My current guess would be:

3 subnets, each with local home/tool-server(s)
and a fourth "subnet" (via router) with the common tool server.

It looks like this:

                        Common
                        Server
                          |
                          |
                        Router
                        / | \
                       /  |  \
                      /   |   \
                   Net   Net   Net
                    A     B     C

It would be even better if you could divide the common applications into
locally independent parts, but that is not always possible.

BTW: Buy a _fast_ router! :-)
=================================

thanks to all

tgorby@emc.com


