SUMMARY: Distributing file systems with symbolic links

From: Scott Babb
Date: Fri Nov 15 1991 - 17:32:48 CST

First, my apologies for the delay in posting this summary. When the new
NIC updated its databases, my feed site disappeared, so many of your
responses bounced. Everything is fixed now. I wanted to try your
suggestions and post the actual results, but I've been too swamped to
actually do that. Sigh.


I have one heavily used disk on my file server (the /home disk), and I
want to split the user directories on that disk between two disks to
balance the load and avoid disk bandwidth saturation. I propose to do
this by creating symbolic links from /home/userA to /disk1/userA and
from /home/userB to /disk2/userB. I wanted to know what the
sun-managers-opinions were on the administrative overhead, the time hit
for passing through the symbolic link, etc.

THE RESPONDENTS (thanks, Managers):

Brian Styles <>
John Pochmara <>
Mike Raffety <>
Ted Rodriguez-Bell <>
adiron!tro@uunet.UU.NET (Tom Olin)
(Eckhard Rueggeberg)
kevins@Aus.Sun.COM (Kevin Sheehan {Consulting Poster Child})
lemuria!uunet!!gerry (G. Roderick Singleton)
(mark galbraith)
(Peter Galvin)
(William Unruh)
(Ron Vasey)


Most people said to check out automount. Indeed, it looks like an
automated way to do exactly what I want.

Some sites were doing exactly what I suggested, and they're not noticing
any problems. Some sites are manually moving directories around and
changing password file entries, etc. Automount looks like a better
solution than either of those options.

It was suggested that I avoid direct automount maps, since they can be
difficult to unmount at will. People felt that there is a minuscule
performance hit for translating the symlink, but the benefits far
outweigh that hit.
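For reference, an indirect map along these lines would do the same job
without hand-made symlinks (SunOS automount syntax; the map, server, and
disk names below are my own placeholders, not anything from the
responses):

```
# Master map entry: serve /home from the indirect map auto.home
/home    /etc/auto.home

# /etc/auto.home: one key per user; host and disk names are made up
userA    server:/disk1/userA
userB    server:/disk2/userB
```

Moving a user to another disk is then a one-line map edit instead of a
symlink shuffle.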

It was also suggested that I look into amd. I'm told that amd is an
improved automounter which is available via anonymous FTP. Alas, I
have no ftp access, but I may try to ftpmail it. (Does anybody know of
a good anon UUCP site?)

There was a question raised as to whether the bottleneck would be in the
actual disks or the SCSI bus. I agree that multiple SCSI busses will
get me better performance, but I don't think that one disk is capable of
putting out enough bytes/second to saturate a SCSI bus.
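A quick back-of-the-envelope check supports that. The figures here are my
own assumptions about typical 1991-era hardware, not numbers from the
thread:

```shell
# Assumed figures: synchronous SCSI-1 tops out near 5 MB/s, while a
# single disk of the era sustains roughly 1.5 MB/s of sequential I/O.
bus_kb_per_s=5000
disk_kb_per_s=1500

# Roughly how many drives fit on the bus before it becomes the bottleneck.
echo $(( bus_kb_per_s / disk_kb_per_s ))   # prints 3
```

So splitting the load across two or three drives on one bus should still
pay off before a second SCSI controller is needed.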

One response suggested that I not divide my disks up into partitions,
and just let bytes be bytes and let automount handle what bytes go
where. That sounds like a good idea.

I'll set up automount as soon as I can find the free time.
Unfortunately, that probably won't be for a couple of months. It looks
like automount will do everything that I want AND let me create a small
development sub-directory on each workstation's local disk so that the
users can compile and link files on their local disks and not load up
the network during builds. This would probably make a user's home
directory look something like this:

/home/server/scott           -> /disk1/scott
/home/server/scott/devel     -> ws1:/export/home/ws1/devel
/home/server/scott/devel/RCS -> server:/disk3/product1/src/RCS

The idea being that /home/server/scott is actually on disk1 (transparent
to Scott) and that it contains files and subdirs and a development
subdir that is actually on Scott's local workstation disk. The RCS
databases are on the file server, but when Scott checks files out of
RCS, they get put on his local disk. This will make compiles much
faster than transferring all of the .c, .h, and .o files over the
network for each compile. The actual transfers will only happen when
Scott checks something in or out of RCS.
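Assuming RCS is installed on the workstation, the round trip described
above might look like this (the file name is hypothetical; RCS here is the
symlink into the server's database from the layout above):

```shell
cd ~/devel            # local workstation disk; RCS is a symlink to the server

co -l product1.c      # network transfer: working copy lands on the local disk
cc -c product1.c      # the compile reads and writes the local disk only

ci -u product1.c      # network transfer: the change goes back to the server
```

Everything between the co and the ci stays off the network.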

Thanks, all, you've given me exactly what I'm looking for, and more.


  These are solely the opinions of:  Scott L. Babb
              "We didn't inherit the Earth from our parents,
                   we are borrowing it from our children."

This archive was generated by hypermail 2.1.2 : Fri Sep 28 2001 - 23:06:16 CDT